AgentBench: Evaluating LLMs as Agents
Contents (excerpt):
J.1.1 Motivation of Validity Analysis
J.1.2 Definition of Validity Analysis
J.1.3 Validity Analysis of Models
J.2 Findings
J.2.1 Instruction Following Matters
J.2.2 Consistency and Execution in Agent Planning
J.2.3 The Influence of Code Tuning on LLMs Acting as Agents
J.2.4 Self-Correcting Ability
(a) Operating System (OS)
Task: "Find the total number of non-empty directories inside the '/etc' directory."
Action space: Any valid bash commands
Observation: System standard output

(b) Database (DB)
Task: "What was the total number of medals won by United States?", given the table "Olympic Medals"
Action space: Any valid SQL commands
Observation: MySQL CLI interface output

(c) Knowledge Graph (KG)
Task: "Find tropical cyclones that are similar to Hurricane Marie and affected Eastern North America."
Action space: Basic KG-querying tools
Observation: Query results

(d) Digital Card Game (DCG)
Task: "Compete against another player using four 'fish' cards in the 'Aquawar' game."
Action space: Four 'fish' cards and Assertion
Observation: Battle process, status of 'fish'

(e) Lateral Thinking Puzzles (LTP)
Task: "A man sleeps with the lights off, and the next morning he commits suicide after opening the windows. Why?"
Action space: Any binary questions
Observation: "Yes", "No", or "Irrelevant"

(f) House-holding (HH)
Task: "Clean some soapbar and put it in countertop"
Action space: A list of allowed actions in the room, or other accessible rooms
Observation: Results after the action.
(g) Web Shopping (WS)
Task: "Looking for a queen size bedspread set in the color redwood, and price lower than 70."
Action space: Search (generate keywords) and Click (choose from all clickable buttons)
Observation: Products' descriptions; the webpage

(h) Web Browsing (WB)
Task: "Find a latest post with more than 10k upvotes in r/announcements community and upvote it."
Action space: 1) Choose one out of all HTML elements in the webpage; 2) Click, Type, or Select Options
Observation: Page HTML (optional: screenshot)

Figure 4: Examples of all environments in AGENTBENCH.

A FRAMEWORK

A.1 TRADITIONAL EVALUATION FRAMEWORKS

Traditional evaluation frameworks can be categorized into two types:

Traditional Tasks (e.g., single-turn generation, classification, etc.). These frameworks are designed for specific tasks and may not be suitable for more complex tasks involving multi-turn interactions.

Agent-based Tasks (tasks with multi-turn interactions). These frameworks are typically tailored to a specific task by the creators of the dataset. They often suffer from several limitations:
• They are designed for a specific task, limiting their applicability to other tasks.
• Communication between components (Task, Agent, and Evaluation) usually occurs within a single process or through the creation of child processes, necessitating evaluation on the same device.
• They can only evaluate one task with one agent at a time.

A.2 OUR DESIGNED EVALUATION FRAMEWORK

To address the limitations of traditional agent-based evaluation frameworks, we have designed a novel framework with the following features:

Decoupled S/C Architecture. Our framework decouples the Task Server, Agent Server, and Evaluation Client components, enabling separate deployments. They communicate via HTTP, allowing them to run on different devices and eliminating the need for co-location to satisfy the requirements of both Task and Agent.

Agent-Task Collaborative Evaluation. Our framework supports collaborative evaluation of multiple agents and tasks in various combinations simultaneously. This flexibility enables more comprehensive testing scenarios.

Network Flow Algorithms. We incorporate network flow algorithms into the Evaluation Client to maximize evaluation efficiency. This optimization ensures that both Agent and Task Workers are utilized to their fullest potential.

Resumable Evaluation. Our framework includes a resumable evaluation feature, making it easy to recover and continue interrupted evaluations seamlessly.

With these advancements, our evaluation framework overcomes the limitations of traditional approaches and provides a more versatile, efficient, and scalable solution for evaluating intelligent agents in multi-turn tasks. The overall structure of our framework is shown in Figure 5.

A.3 IMPLEMENTATION OF THE MAX-FLOW ALGORITHM
In our evaluation process, we employ the Edmonds-Karp algorithm (Edmonds & Karp, 1972) as a practical implementation of the Ford-Fulkerson method (Ford Jr & Fulkerson, 1962), which computes the maximum flow in a network with a time complexity of O(|V||E|²). To formalize the problem, consider a scenario with n agents, denoted as A_1, A_2, ..., A_n, and m tasks, denoted as T_1, T_2, ..., T_m. Our objective is to conduct evaluations in l different groups, each focusing on a pair (A_{x_k}, T_{y_k}), where 1 ≤ k ≤ l.
Additionally, for every such pair (A_{x_k}, T_{y_k}), we should evaluate s_k samples. The numbers of workers for agent A_k and task T_k are denoted as w(A_k) and w(T_k), respectively. The flow graph we construct can be described as G = ⟨V, E⟩, where the vertex set V is defined as

V = {A_k | 1 ≤ k ≤ n} ∪ {T_k | 1 ≤ k ≤ m} ∪ {S, D},   (1)

and the weighted edge set E is defined as

E = {(A_{x_k}, T_{y_k}, s_k) | 1 ≤ k ≤ l} ∪ {(S, A_k, w(A_k)) | 1 ≤ k ≤ n} ∪ {(T_k, D, w(T_k)) | 1 ≤ k ≤ m}.   (2)

Figure 5: The toolkit of AGENTBENCH is meticulously crafted for the seamless deployment of tasks and agents, coupled with an efficient evaluation assignment system. Agent servers (left) come in diverse forms, enabling us to deploy a model server and expose an accessible API through the HTTP protocol. Task servers (right) are composed of a task controller and several task workers, each running in an isolated environment, ensuring freedom from conflicts and optimal task execution. The evaluation client (center) establishes an agent-task graph and employs the max-flow algorithm to optimize interactions. This optimization results in client workers seamlessly engaging with agent and task servers, facilitating the smooth execution of tasks and evaluations.

We apply the max-flow algorithm from the source vertex S to the destination vertex D. For each flow edge (A_i, T_j, f_(i,j)), we allocate f_(i,j) samples for agent A_i and task T_j. After allocation, the weight of each edge is reduced by the value of its flow. Upon completion of an evaluation, the weight of the edge connected to either S or D is increased by 1. We also establish a periodic interval at which the algorithm is re-applied to the network to pick up newly available evaluation triples.
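To make the allocation step concrete, the following is a minimal sketch, under assumed data structures, of how such an agent-task flow network could be built and solved with the Edmonds-Karp routine shipped in networkx. It illustrates the scheme above and is not the AgentBench implementation; the names agent_workers, task_workers, and pending are our own.

```python
# A minimal sketch (not the AgentBench code) of allocating evaluation samples
# by solving max-flow on the agent-task graph described above.
import networkx as nx
from networkx.algorithms.flow import edmonds_karp

def allocate(agent_workers, task_workers, pending):
    """agent_workers: {agent: w(A_k)}, task_workers: {task: w(T_k)},
    pending: {(agent, task): s_k samples still to evaluate}."""
    G = nx.DiGraph()
    for a, w in agent_workers.items():      # S -> A_k with capacity w(A_k)
        G.add_edge("S", ("A", a), capacity=w)
    for t, w in task_workers.items():       # T_k -> D with capacity w(T_k)
        G.add_edge(("T", t), "D", capacity=w)
    for (a, t), s in pending.items():       # A_xk -> T_yk with capacity s_k
        G.add_edge(("A", a), ("T", t), capacity=s)

    _, flow = nx.maximum_flow(G, "S", "D", flow_func=edmonds_karp)
    # flow[("A", a)][("T", t)] samples of pair (a, t) can be started right now.
    return {(a, t): flow[("A", a)].get(("T", t), 0) for (a, t) in pending}

print(allocate({"gpt-4": 2}, {"os": 1, "db": 2},
               {("gpt-4", "os"): 5, ("gpt-4", "db"): 3}))
```

Each positive flow value is then the number of samples of that (agent, task) pair that can be dispatched immediately without oversubscribing either side's workers; re-running the routine periodically plays the role of the interval-based refresh described above.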
B OPERATING SYSTEM

B.1 DATASET DETAILS

Construction Details. Each evaluation sample in the OS dataset encompasses the following contents:
• Instruction. The description of the problem in natural language that the LLM needs to solve.
• Docker Environment. The docker image on which the interaction is based (e.g., local-os/default).
• Initialization Script (Optional). The bash scripts that need to be executed independently (docker exec) before the interaction starts (e.g., user configurations, files, system statuses).
• Start Script (Optional). The bash scripts executed after the shell is created and before the interaction.
• Checking Pipeline. The checking method used to judge the correctness of the LLM's answer or operation.
• Example Script (Optional). The bash scripts that serve as reference solutions; in other words, executing them during the interaction yields correct results. Used only for the unit tests introduced below.

We design two types of tasks in the OS evaluation beyond conventional QA-only evaluation.
• Question Answering (QA): LLMs need to output commands to solve specific questions in the OS (e.g., aggregate numbers, view file contents). In this case, they must finally commit an answer.
• Operation: LLMs need to output commands to perform verifiable operations on the operating system (e.g., change file/user states). In this case, they do not need to commit a final answer.

Thanks to the checking pipeline, both types of tasks can be evaluated in a unified way.

Collecting challenging queries regarding OS can be difficult. In practice, about half of our instructions are created or collected from humans, while the other half are mostly QA problems generated by gpt-4 and strictly filtered by passing the unit tests (i.e., yielding correct answers/states). For human instructions, we first gather 6,000 real problems and solutions with the bash or shell tag from Stack Overflow (https://stackoverflow.com/). We then sort them by score (count of likes) and invite 8 annotators with programming backgrounds to select challenging ones. For each selected problem, they create one or more task instructions and write a detailed problem description, the initialization script, the starting script, and the checking pipeline. Finally, we conduct cross verification for each evaluation sample to make sure it is correct. Each problem takes about 2 hours to annotate.

For generated problems, our unit test contains the following parts: 1) Initialization Script Correction: we execute the initialization script and remove samples whose initialization is wrong, i.e., whose exit code is not 0. 2) Example Code Correction: we execute the example code and the checking pipeline to judge the correctness of the answer, and remove samples with wrong answers.

In the end, we curate 144 high-quality, diverse OS evaluation samples accompanied by testing interactive environments and corresponding checking pipelines (i.e., scripts). Agents are prompted with 1-shot CoT to better format their responses (cf. Appendix B).

Evaluation Setup. For each problem (i.e., instruction), the execution can be divided into 3 parts.
• Initialization. We create a docker container with a specific image, and we run an initialization bash script to set up the environment specified by the instruction.
• Interaction.
We start a new shell in this docker container and run the starting bash script specified by the instruction. The LLM under test is then fed the instruction and the problem description, and it starts interacting with the shell. In each turn, two actions are provided. One is to run a bash script, which allows the model to generate and run a series of commands in the shell. The other is to commit an answer, which allows the model to terminate the interaction process. Notably, the model is judged to have failed to solve the problem if it exceeds the round limit (8 by default).
• Checking. There is a checking pipeline containing a list of scripts f_1, f_2, ..., f_n, where f_k denotes the k-th script piece in the pipeline. For f_k, the answer of the model, o_0, and the outputs o_t of the earlier scripts f_t (t < k) are fed as input arguments, i.e., o_k = f_k(o_0, o_1, ..., o_{k-1}). The result is correct if and only if all the scripts exit with code 0.

Metrics. We measure the Success Rate of LLMs in solving problems. There are only two final statuses for each problem: wrong or correct.
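As a concrete illustration of the checking pipeline just described, here is a minimal sketch under an assumed interface (plain bash scripts invoked with the model answer and earlier outputs as arguments); it mirrors the chaining o_k = f_k(o_0, ..., o_{k-1}) but is not the benchmark's actual checker.

```python
# A minimal sketch (assumed interface) of the OS checking pipeline: each script
# f_k receives the model's answer o_0 and the outputs of the previous scripts,
# and the sample is judged correct only if every script exits with code 0.
import subprocess

def run_checking_pipeline(scripts, model_answer):
    outputs = [model_answer]                 # o_0 is the model's committed answer
    for script in scripts:                   # f_1, ..., f_n
        proc = subprocess.run(["bash", script, *outputs],
                              capture_output=True, text=True)
        if proc.returncode != 0:             # any non-zero exit code -> wrong
            return False
        outputs.append(proc.stdout.strip())  # o_k is fed to later scripts
    return True

# Hypothetical usage with two checking scripts:
# run_checking_pipeline(["extract_count.sh", "compare_with_expected.sh"], "220")
```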
B.2 ACTIONS

In the OS evaluation, we design two major types of actions: bash and commit.
• Bash: launches a bash command (using the textual input in the content field).
• Commit: announces the completion of the goal. If the task is a QA problem, the agent should submit the final answer in the content field; otherwise, the checking pipeline will automatically check the system status to judge the correctness.

B.3 PROMPT EXAMPLE

A prompt for OS evaluation consists of the instruction and the formulation of the interaction trajectory. An example of the instruction prompt is:

You are an assistant that will act like a person, I will play the role of linux(ubuntu) operating system. Your goal is to implement the operations required by me or answer to the question proposed by me. For each of your turn, you should first think what you should do, and then take exact one of the three actions: "bash", "finish" or "answer".

1. If you think you should execute some bash code, take bash action, and you should print like this:

Think: put your thought here.
Act: bash
```bash
# put your bash code here
```

2. If you think you have finished the task, take finish action, and you should print like this:

Think: put your thought here.
Act: finish

3. If you think you have got the answer to the question, take answer action, and you should print like this:

Think: put your thought here.
Act: answer(Your answer to the question should be put in this pair of parentheses)

If the output is too long, I will truncate it. The truncated output is not complete. You have to deal with the truncating problem by yourself. Attention, your bash code should not contain any input operation. Once again, you should take only exact one of the three actions in each turn.

The trajectory is organized in CoT style, and we use a 1-shot example to help the model better understand the action space, as follows:

<USER>: {{ INSTRUCTION }} Now, my problem is: Tell me how many files are in the directory "/etc"?

<AGENT>: Think: To count the files in /etc, I need to print all the files in it.
Act: bash
```bash
ls /etc
```

<USER>: The output of the OS:
cpi cron.hourly fuse.conf iproute2 lvm networkd-dispatcher protocols selinux tmpfiles.d [truncated because the output is too long]

<AGENT>: Think: The output has been truncated because it is too long, so I need to count files by script directly.
Act: bash
```bash
ls -1 /etc | wc -l
```

<USER>: The output of the OS:
220

<AGENT>: Think: Now I get the answer, it is 220.
Act: answer(220)

<USER>: Now, I will start a new problem in a new OS. My problem is: {{ PROBLEM }}

<LOOP>:
<AGENT>: {{ RESPONSE }}
Switch AGENT.Action:
  "bash": <USER>: The output of the OS: {{ OUTPUT }}
  "answer" or "finish": Executing Checking Script; Exit Loop
  otherwise: Exit Loop
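The dispatch in the loop above can be made concrete with a small parser sketch; the regular expressions and function below are illustrative assumptions rather than the benchmark's actual parsing code.

```python
# A minimal sketch (assumed parsing rules) of dispatching on the agent's
# "Act:" line, mirroring the <LOOP> pseudocode above.
import re

def parse_action(response: str):
    """Return (action, payload) extracted from one agent response."""
    act = re.search(r"Act:\s*(bash|finish|answer)", response)
    if act is None:
        return "invalid", None
    kind = act.group(1)
    if kind == "bash":
        code = re.search(r"```bash\n(.*?)\n```", response, re.S)
        return "bash", code.group(1) if code else ""
    if kind == "answer":
        ans = re.search(r"answer\((.*?)\)", response, re.S)
        return "answer", ans.group(1) if ans else ""
    return "finish", None

print(parse_action("Think: count files.\nAct: bash\n```bash\nls -1 /etc | wc -l\n```"))
print(parse_action("Think: done.\nAct: answer(220)"))
```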
C DATABASE

C.1 DATASET DETAILS

Construction Details. We acquire the source queries and databases by reusing and amalgamating several established datasets: WikiSQL (Zhong et al., 2017), WikiTableQuestions (Pasupat & Liang, 2015), SQA (Iyyer et al., 2017), HybridQA (Chen et al., 2020), and FeTaQA (Nan et al., 2021), ensuring the diversity of instructions and data. To further enrich (and avoid leakage from) the dataset, we employ gpt-3.5-turbo to perform data augmentation. Provided with the header information and original rows of a table, gpt-3.5-turbo generates ten new rows. Using the name, header information, and some SQL examples, we task gpt-3.5-turbo with generating five additional SQL queries. Each acquired SQL statement is then fed sequentially into gpt-3.5-turbo with instructions to rephrase the sentences without changing their original meanings. The valid entries are filtered and sampled into the final dataset of 1,599 entries, categorized into three basic types of DB operations: select, insert, or update.

As a result, each sample in the dataset comprises:
• Instruction. A piece of description delineating the problem and guiding the agent's action.
• Table Info. Explanations about the table name and column names (i.e., meta information).
• Table Content. The actual contents within the table, used to create the database.
• Correct Answer. For selection-type samples, it is a text answer; for the other entry types (i.e., insert, update), it is the hash code of the correctly modified table.

Evaluation Setup. We assess each problem in the dataset through the following procedure:
• Initialization. An initial SQL script is constructed based on the table content, and a MySQL database is initialized in a docker container, which provides a forwarded port for interaction.
• Interaction. An initial prompt guides the agent to provide an executable SQL command along with its reasoning. The agent is provided with the prompt, the instruction, and the table information description, and it is expected to return a response in the given format. We execute the SQL and directly return the result to the agent, continuing this loop until the agent commits its final answer or encounters an error (e.g., reaching the maximum round limit or failing to parse the action).
• Checking. For selection-type problems, we compare the agent's answer with the standard text answer, disregarding the order but expecting an exact match. If the answer is a single number, all equivalent representations are accepted (e.g., 5, "5.0", and "+5" are considered identical). For insertion- or updating-type problems, we compare the hash of the table after the agent's operation with the hash of the table after the correct SQL operation.

Metrics. We measure the Success Rate of agents in completing instructions. The overall success rate is the macro average of the success rates of the three categories.
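To make the checking step concrete, here is a minimal sketch of order-insensitive answer comparison with numeric normalization, plus a hash over table contents for insert/update problems. The logic and helper names are assumptions for illustration, not the benchmark's actual checker.

```python
# A minimal sketch (illustrative assumptions) of the DB checking step.
import hashlib
from collections import Counter

def normalize(cell):
    """Treat equivalent numeric forms (5, "5.0", "+5") as identical."""
    try:
        return float(str(cell))
    except ValueError:
        return str(cell).strip()

def answers_match(predicted, gold):
    """Order-insensitive exact match between the two answer lists."""
    return Counter(map(normalize, predicted)) == Counter(map(normalize, gold))

def table_hash(rows):
    """Hash the table contents after an insert/update operation."""
    canon = "\n".join(",".join(map(str, row)) for row in rows)
    return hashlib.md5(canon.encode("utf-8")).hexdigest()

assert answers_match(["5.0", "Gold"], ["+5", "Gold"])
```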
C.2 DATA AUGMENTATION

We elaborate on the data augmentation for three types of DB tasks based on the existing SQL datasets (Zhong et al., 2017; Pasupat & Liang, 2015; Iyyer et al., 2017; Chen et al., 2020; Nan et al., 2021), which consist solely of QA problems and lack common operations such as inserting and updating. We first test the validity of the raw data and then randomly sample from each category of the filtered data to form the final dataset. We adopt gpt-3.5-turbo to enrich and rewrite the original instructions.
• Insert: Given the name, the header information, and the original rows of a table, we generate 5 SQL statements for insertion. We then rephrase the sentences without changing their meaning (using shorter or longer expressions or changing the order).
• Update: Given the name, the header information, and the previously generated 5 SQL statements for insertion, we generate 5 SQL statements for modification based on the given statements. We rephrase the sentences following the above standard.

To ensure data quality, each augmented query statement is required to pass the unit test scripts. The query type of tasks falls into the traditional scope of Text-to-SQL evaluation, and we only sample and categorize them for evaluation. Each query statement in the existing datasets is classified into one of the following types: "Counting", "Aggregation-MIN", "Aggregation-MAX", "Aggregation-AVG", "Aggregation-SUM", "Ranking", or "Comparison". Each one can only belong to one type; the remaining statements are categorized as "Other".

C.3 PROMPT EXAMPLE

We use the following format of prompts:

User: I will ask you a question, then you should help me operate a MySQL database with SQL to answer the question. You have to explain the problem and your solution to me and write down your thoughts. After thinking and explaining thoroughly, every round you can choose to operate or to answer. Your operation should be like this:
Action: Operation
```sql
SELECT * FROM table WHERE condition;
```
You MUST put SQL in markdown format without any other comments. Your SQL should be in one line. Every time you can only execute one SQL statement. I will only execute the statement in the first SQL code block. Every time you write a SQL, I will execute it for you and give you the output. If you are done operating, and you want to commit your final answer, then write down:
Action: Answer
Final Answer: ["ANSWER1", "ANSWER2", ...]
DO NOT write this pattern unless you are sure about your answer. I expect an accurate and correct answer.
Your answer should be accurate. Your answer must be exactly the same as the correct answer. If the question is about modifying the database, then after done operation, your answer field can be anything. If your response cannot match any pattern I mentioned earlier, you will be judged as FAIL immediately. Your input will be raw MySQL response, you have to deal with it by yourself.

D KNOWLEDGE GRAPH

D.1 DATASET DETAILS

Construction Details. In an effort to gauge the decision-making abilities of LLMs, specifically their proficiency in long-term planning, we have meticulously compiled a dataset sourced from pre-existing knowledge base question answering (KBQA) datasets on FREEBASE, including GrailQA (Gu et al., 2021), ComplexWebQuestions (Talmor & Berant, 2018), and GraphQuestions (Su et al., 2016). We envisage KBQA as a tool-learning setting, thereby outfitting the LLM with an array of KG-querying tools. By leveraging the S-expressions annotated in (Gu & Su, 2022), we can accurately establish the optimal sequence of tool applications corresponding to each question. In order to sustain a high degree of difficulty in the tasks, we have opted to preserve only those questions that necessitate a minimum of five tool invocations. Through this rigorous selection methodology, we have accrued a dataset consisting of 1,663 questions. Each data entry in the dataset has the following fields:
• Input Question. A natural language utterance that involves intricate KG information seeking.
• Topic Entities. A set of topic entities mentioned in the input question.
We obviate the need for entity linking, allowing the LLM to focus on long-term planning.
• Action Sequence. The gold action sequence (i.e., tool invocations) that leads to the target answer.
• Gold Answer. The gold answer to the question, typically characterized by a set of KG entities.

Note that, in contrast to interacting with databases in AgentBench, where the particulars and content of the database are integrated into the input, describing an extensive KG to the LLM is not particularly feasible. This task is therefore characterized by a partially observable environment, which is a critical aspect of its nature.

Evaluation Setup. To support our evaluation, we first host the latest version of FREEBASE using Virtuoso (https://github.com/dki-lab/Freebase-Setup). Due to the complexity of SPARQL queries, we decide not to burden the LLM with crafting SPARQL queries by itself. Instead, we implement a series of APIs that interface with the Virtuoso backend, allowing the LLM to query the KG more effortlessly. We use the first 500 tasks from the dataset for evaluation. Each task, when successfully executed, should ideally proceed through the following phases.
• Initialization. We prompt the LLM with the concrete task description, including the concrete description of each KG-querying tool that we provide.
• Interaction. During this phase, the LLM is expected to invoke different tools to access the KG and accumulate the necessary information to respond accurately to the question. Importantly, the process is entirely autonomous, meaning the LLM determines the workflow entirely by itself.
• Final Answer Prediction. During its interaction with the KG, the LLM may generate a list of variables, each one representing a unique set of entities. If the LLM determines that one particular variable should signify the final answer, it will present this variable as its output and conclude the task.

Metrics. We use the F1 score as the primary evaluation metric in our study, calculated by comparing the model's
predicted answers to the gold-standard answers. In addition to the F1 score, we also use the Exact Match metric. However, unlike previous studies that measure Exact Match on the logical form, we assess it based on the exact match between the predicted and gold answer sets. Lastly, we also evaluate the Executability of the action sequences generated by the model. If the model's action sequence produces any set of answers when executed, it scores 1.0 for Executability; if it fails to produce an answer, it scores 0.
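For clarity, the set-based F1 and Exact Match described here can be computed as in the following minimal sketch (our own illustration, not the benchmark's scorer; the entity identifiers in the example are hypothetical).

```python
# A minimal sketch of set-based F1 and Exact Match between the predicted
# and gold answer entity sets.
def f1_em(predicted, gold):
    pred, gold = set(predicted), set(gold)
    em = float(pred == gold)                    # exact match on the answer sets
    if not pred or not gold:
        return 0.0, em
    tp = len(pred & gold)
    if tp == 0:
        return 0.0, em
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall), em

print(f1_em({"m.02mjmr", "m.0d3k14"}, {"m.02mjmr"}))  # hypothetical Freebase MIDs
```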
D.2 PROMPT EXAMPLE

Task description:

User: You are an agent that answers questions based on the knowledge stored in a knowledge base. To achieve this, you can use the following tools to query the KB.

1. get_relations(variable: var) -> list of relations
A variable can be either an entity or a set of entities (i.e., the result of a previous query). This function helps to navigate all relations in the KB connected to the variable, so you can decide which relation is the most useful to find the answer to the question. A simple use case can be
'get_relations(Barack Obama)', which finds all relations/edges starting from the entity Barack Obama. The argument of get_relations should always be an entity or a variable (e.g., #0) and not anything else.

2. get_neighbors(variable: var, relation: str) -> variable
Given a variable, this function returns all entities connected to the variable via the given relation. Note that get_neighbors() can only be used after get_relations() is used to find a set of viable relations. A simple use case can be 'get_neighbors(Barack Obama, people.person.profession)', which returns the profession of Obama in Freebase.
3. intersection(variable1: var, variable2: var) -> variable
Given two variables, this function returns the intersection of the two variables. The two variables MUST be of the same type!

4. get_attributes(variable: var) -> list of attributes
This function helps to find all numerical attributes of the variable. Please only use it if the question seeks for a superlative accumulation (i.e., argmax or argmin).

5. argmax(variable: var, attribute: str) -> variable
Given a variable, this function returns the entity with the maximum value of the given attribute. It can only be used after get_attributes() is used to find a set of viable attributes. A simple use case can be 'argmax(variable, age)', which returns the oldest entity belonging to the variable.

6. argmin(variable: var, attribute: str) -> variable
Given a variable, this function returns the entity with the minimum value of the given attribute. It can only be used after get_attributes() is used to find a set of viable attributes. A simple use case can be 'argmin(variable, age)', which returns the youngest entity belonging to the variable.

7. count(variable: var) -> int
Given a variable, this function returns the number of entities belonging to the variable.

After a variable is produced along the process, you need to judge whether a variable is the final answer to the question. Each variable is represented as an id starting from 0. For example, #0 is the first variable, #1 is the second variable, and so on. Once you find the answer, respond with 'Final Answer: #id', where id is the id of the variable that you think is the final answer. For example, if you think #3 is the final answer, you MUST respond with 'Final Answer: #3'.

You can only take ONE action at a time!! After you get the observation from its execution, you can take another action. You can take at most 15 actions to find the answer to the question.

Given the inherent complexity associated with enabling LLMs to query the KB, it has been observed that, in a zero-shot setting, LLMs struggle to generate any outputs of substantive relevance. As a result, we additionally provide a teaching example in our prompt:

Here is a demo that you can use to get started.
Question: the bipropellant rocket engine with kerosene and gas-generator cycle is designed by who?

Entities: [Gas-generator cycle, Kerosene]

Thought: I need to first find engines with gas-generator cycle. To achieve this, I will query the KB to find relations connected to the entity 'Gas-generator cycle' and see if any of them can help me find the answer.
Action: get_relations(Gas-generator cycle)
Observation: [spaceflight.rocket_engine_cycle.rocket_engines, spaceflight.satellite.orbiting, spaceflight.rocket_stage.fuel, spaceflight.satellite_manufacturer.spacecraft_manufactured, automotive.fuel.engines]

Thought: From the above relations, I might use spaceflight.rocket_engine_cycle.rocket_engines to find the engines of Gas-generator cycle.
Action: get_neighbors(Gas-generator cycle, spaceflight.rocket_engine_cycle.rocket_engines)
Observation: variable #0, which are instances of spaceflight.bipropellant_rocket_engine

Thought: I also need to find engines with kerosene. To achieve this, I will query the KB to find relations connected to the entity 'Kerosene'.
Action: get_relations(Kerosene)
Observation: [spaceflight.satellite_manufacturer.spacecraft_manufactured, automotive.fuel.engines, spaceflight.rocket_engine_cycle.rocket_engines]

Thought: From the above relations, I might use spaceflight.rocket_engine_cycle.rocket_engines to find engines with kerosene cycle.
Action: get_neighbors(Kerosene, spaceflight.rocket_engine_cycle.rocket_engines)
Observation: variable #1, which are instances of spaceflight.bipropellant_rocket_engine

Thought: The engines with both gas-generator cycle and kerosene cycle should be the intersection of variable #0 and variable #1.
Action: intersection(#0, #1)
Observation: variable #2, which are instances of spaceflight.bipropellant_rocket_engine

Thought: Now I need to find who designed such engines, which will be the final answer. To achieve this, I will query the KB to find relations connected to the variable #2.
Action: get_relations(#2)
Observation: [spaceflight.rocket_engine.manufactured_by, spaceflight.rocket_engine.designed_by, spaceflight.rocket_engine.design_period, spaceflight.rocket_engine.status]

Thought: From the above relations, I might use spaceflight.rocket_engine.designed_by to find the designer of the engines.
Action: get_neighbors(#2, spaceflight.rocket_engine.designed_by)
Observation: variable #3, which are instances of spaceflight.rocket_engine_designer

Thought: #3 is the final answer to the question, which represents the target rocket engine designer.
Final Answer: #3
E DIGITAL CARD GAME

E.1 DATASET DETAILS

Construction Details. We use the Aquawar framework as the basis for our interactive system. The first type of interaction is the action phase, where the model needs to select the fish it wants to act with and then choose the target for the skill. To ensure the validity of model operations, we perform checks for valid actions. The second type of interaction is the guess phase, where we provide the model with known information, including fish species and skill descriptions, and the enemy's targets. We have two naive strategies (random and greedy search) for testing purposes. The following is a detailed definition and description of the game process.
• Player and Cards. It is a two-player battle game with four pet fish (i.e., cards) on each team. The card pool consists of ten fish (Appendix E.2), and both players choose four definite fish to use before the start of the game.
• Initial State. Each fish has 400 initial health, 200 initial attack power, an active ability, and a passive ability.
• Basic Rule. Players choose a live fish to use its active skill or a normal attack on an enemy fish each round. Every living fish's passive ability triggers automatically when certain conditions are met.
• Assertion Mechanism. The identity of a player's fish is initially hidden. The counter-player can guess one of the player's fish's identities each round. If the counter-player guesses correctly, the player's fish's identity is revealed, and all of that player's fish get damaged.
• Round Process. Within a round of the game, the player for that round will first assert the identity of one opponent's
fish that are alive and whose identities have not been revealed. If the assertion is correct, all of the opponent's fish that remain alive get damaged. Subsequently, the player for that round can command one alive fish to execute a normal attack or an active ability. Following this, any fish that meets the condition will unleash its passive ability.
• Victory Condition. The victory condition is to have more fish alive at the end of the game.

To balance agent engagement and game complexity simultaneously, we design two stages of game logic. We remove the assertions in the first stage while keeping them in the second stage. We test all the models on the first and second stages separately and use the average performance as the final score.

We choose two naive playing strategies as the baselines.
• The first strategy simply takes a random action from all available action spaces.
• The second strategy tries to use an AOE attack if possible, continuously evaluating whether a one-hit kill is possible. It then attempts to use active skills and finally resorts to normal attacks. Overall, this strategy follows a certain pattern but is not necessarily optimal.

Evaluation Setup. For each game played, we evaluate with the following steps:
• Initialization. We initialize the modified game logic environment (compiled with pybind) and the baseline game agent under Ubuntu 20.04.
• Interaction. We place rule descriptions in the instruction prompt according to the game stage, and the LLM agent interacts and competes strategically with the baseline within the game logic environment. We give the LLM agent five chances to respond in the correct format; it is immediately deemed defeated if it fails to output legal actions within the given number of attempts. At the same time, we encourage the model to output its reasoning process in CoT.
• Result Calculation. During the interaction process, we record the entire game process for battle playback and calculate the game results to obtain the metrics for the task.

Metrics. Our evaluation uses metrics covering basic gameplay elements, such as the winning rounds (Win Round), total played rounds (Total Round), winning rate (Win Rate), and the total damage inflicted compared to total health (Damage Rate). We ultimately provide a final reward score according to the above metrics:

reward = 0.7 × metric_winrate + 0.3 × metric_damagerate
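Assuming each finished game is logged with its outcome and the damage dealt, the final reward can be computed as in the short sketch below; the record format is an assumption for illustration, not the benchmark's logging schema.

```python
# A minimal sketch (assumed log format) of the DCG metrics and final reward.
def dcg_reward(games):
    """games: list of dicts like {"win": bool, "damage": int, "enemy_total_hp": int}."""
    win_rate = sum(g["win"] for g in games) / len(games)
    damage_rate = sum(g["damage"] / g["enemy_total_hp"] for g in games) / len(games)
    return 0.7 * win_rate + 0.3 * damage_rate

print(dcg_reward([{"win": True, "damage": 1600, "enemy_total_hp": 1600},
                  {"win": False, "damage": 800, "enemy_total_hp": 1600}]))
```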
E.2 THE ATTRIBUTES OF FISH

The game has ten kinds of fish according to the game rules.

• Spray
  - Counter (Passive): Inflicts 30 damage to the attacker when a teammate's health is below 30%.
  - AOE (Active): Attacks all enemies for 35% of its attack points.
• Flame
  - Counter (Passive): Inflicts 30 damage to the attacker when a teammate's health is below 30%.
  - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
• Eel
  - Deflect (Passive): Distributes 70% of damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated.
  - AOE (Active): Attacks all enemies for 35% of its attack points.
• Sunfish
  - Deflect (Passive): Distributes 70% of damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated.
  - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
• Barracuda
  - Reduce (Passive): There is a 30% chance to avoid any incoming damage each time.
  - Crit (Active): Deals 120 CRITICAL damage to an enemy.
• Mobula
  - Reduce (Passive): There is a 30% chance to avoid any incoming damage each time.
  - Subtle (Active): Chooses a teammate or itself to reduce the damage taken by 70% when attacked, and increases its attack points by 20.
• Octopus
  - Heal (Passive): Regains 20 health points if its health is still greater than 0 when attacked.
  - Infight (Active): Inflicts 75 damage on one living teammate and increases your attack points by 140.
• Whiteshark
  - Heal (Passive): Regains 20 health points if its health is still greater than 0 when attacked.
  - Crit (Active): Deals 120% CRITICAL damage of your attack power to the enemy with the lowest health. If the target's health is below 160, increases the CRITICAL damage to 140%.
• Hammerhead
  - Explode (Passive): Deals 40 damage to the attacker when attacked but not killed. When its health is below 20%, increases its attack points by 15.
  - Crit (Active): Deals 120% CRITICAL damage of your attack power to the enemy with the lowest health. If the target's health is below 160, increases the CRITICAL damage to 140%.

As can be seen, there is overlap among the active and passive skills of different pet fish, which is done to better conceal the identity information of the pet fish in the game and to increase its strategic aspects.
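For illustration, the card pool above can be encoded as a simple mapping from fish name to its ability pair; the structure below is our own sketch for exposition, not the game engine's actual data model.

```python
# A minimal sketch (illustrative only) of the card pool: every fish starts with
# 400 HP and 200 ATK and differs only in its passive/active ability pair.
FISH_POOL = {
    "spray":      {"passive": "counter", "active": "aoe"},
    "flame":      {"passive": "counter", "active": "infight"},
    "eel":        {"passive": "deflect", "active": "aoe"},
    "sunfish":    {"passive": "deflect", "active": "infight"},
    "barracuda":  {"passive": "reduce",  "active": "crit"},
    "mobula":     {"passive": "reduce",  "active": "subtle"},
    "octopus":    {"passive": "heal",    "active": "infight"},
    "whiteshark": {"passive": "heal",    "active": "crit"},
    "hammerhead": {"passive": "explode", "active": "crit"},
}

def new_fish(name):
    return {"name": name, "hp": 400, "atk": 200, **FISH_POOL[name]}

print(new_fish("spray"))
```

The overlap of ability pairs across names is what keeps the opponent's assertion phase non-trivial.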
E.3 PROMPT EXAMPLE

We use the following format of prompts for actions:

This is a two-player battle game with four pet fish on each team. The types of fish may vary. Each fish has its 400 initial health, 200 attack power, active ability, and passive ability.
You can choose a live fish to use its active skill or normal attack (causing half of attack power as damage) on an enemy fish each round. When the conditions are met, the fish's passive ability will automatically trigger, regardless of whether it is chosen.
Your fish's identity is initially hidden. The enemy can guess one of your fish's identity in each round. If the enemy guesses right, your fish's identity is revealed, and each of your fish will get 50 damage.
The victory condition is to have more fish alive at the end of the game.

The following are the four types of your pet fish:
{'spray': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%.", 'active': 'AOE: Attack all enemies for 35% of its attack points.'}, 'flame': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%.", 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate!"}, 'eel': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated.', 'active': 'AOE: Attack all enemies for 35% of your attack points.'}, 'sunfish': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated.', 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate!"}}

The following are the four types of enemy's pet fish:
{'spray': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%.", 'active': 'AOE: Attack all enemies for 35% of its attack points.'}, 'flame': {'passive': "Counter: Deal 30 damage to attacker when a teammate's health is below 30%.", 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate!"}, 'eel': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated.', 'active': 'AOE: Attack all enemies for 35% of your attack points.'}, 'sunfish': {'passive': 'Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated.', 'active': "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You can't attack yourself or dead teammate!"}}

Play the game with me. In each round, you should output your thinking process, and return your move with the following JSON format:
{'pick_fish': 'pick an alive fish, you should give the name of the alive fish', 'action': 'choose from [normal, active]', 'target_position': "target's position, you must choose from [0,3]"}

Notice! You must return your move in each round. Otherwise, you will be considered defeated.
Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â , â activeâ : "Infight: Attack one alive teammate for 75 31 Technical Report (v0.2) damage and increases your attack points by 140. Notice! You canâ t attack yourself or dead teamate! "}} Play the game with me. In each round, you should output your thinking process, and return your move with following JSON format: {â pick_fishâ : â pick an alive fish, you should give the name of the alive fishâ , â actionâ : â choose from [normal, active]â , â target_positionâ : " targetâ s position, you must choose from [0,3]"} Notice! You must return your move in each round. Otherwise, you will be considered defeated.
2308.03688#94
2308.03688#96
2308.03688
[ "2204.02311" ]
2308.03688#96
AgentBench: Evaluating LLMs as Agents
We use the following format of prompts for assertions in stage2: This is a two-player battle game with four pet fish in each team. The types of fish may vary. Each fish has its initial health, attack power, active ability, and passive ability. All fishâ s identities are initially hidden. You should guess one of the enemy fishâ s identities in each round. If you guess right, the enemy fishâ s identity is revealed, and each of the enemyâ s fish will get 50 damage. You can only guess the identity of the live fish. The victory condition is to have more fish alive at the end of the game. The following are the four types of your pet fish: {â
2308.03688#95
2308.03688#97
2308.03688
[ "2204.02311" ]
2308.03688#97
AgentBench: Evaluating LLMs as Agents
sprayâ : {â passiveâ : "Counter: Deal 30 damage to attacker when a teammateâ s health is below 30%. ", â activeâ : â AOE: Attack all enemies for 35% of its attack points.â }, â flameâ : {â passiveâ : "Counter: Deal 30 damage to attacker when a teammateâ s health is below 30%. ", â activeâ : "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canâ t attack yourself or dead teamate! "}, â eelâ : {â passiveâ : â Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â , â activeâ : â AOE: Attack all enemies for 35% of your attack points.â }, â sunfishâ : {â passiveâ : â Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â , â activeâ : "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canâ t attack yourself or dead teamate! "}} The following are the four types of enemyâ s pet fish: {â sprayâ : {â passiveâ : "Counter: Deal 30 damage to attacker when a teammateâ s health is below 30%. ", â activeâ : â AOE: Attack all enemies for 35% of its attack points.â }, â flameâ : {â passiveâ : "Counter: Deal 30 damage to attacker when a teammateâ s health is below 30%. ", â activeâ : "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canâ t attack yourself or dead teamate! "}, â eelâ : {â passiveâ : â Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â , â activeâ : â AOE: Attack all enemies for 35% of your attack points.â }, â sunfishâ : {â passiveâ : â
2308.03688#96
2308.03688#98
2308.03688
[ "2204.02311" ]
2308.03688#98
AgentBench: Evaluating LLMs as Agents
Deflect: Distribute 70% damage to teammates and takes 30% when attacked. Gains 40 attack points after taking 200 damage accumulated. â , â activeâ : "Infight: Attack one alive teammate for 75 damage and increases your attack points by 140. Notice! You canâ t attack yourself or dead teamate! "}} Play the game with me. In each round, you should output your thinking process, and return your move with following JSON format:
{'guess_type': "the enemy's fish type you may guess", 'target_position': "guess target's position, you must choose from [0,3]"}

Notice! You must return your move in each round. Otherwise, you will be considered defeated.

F LATERAL THINKING PUZZLES

F.1 DATASET DETAILS

Construction Details. Each sample consists of a pair of a story (a riddle, e.g., "A man walked into a restaurant, ordered a bowl of turtle soup, and after finishing it, he committed suicide. Why did he do that?") and a truth. We categorize samples into four levels of difficulty: easy, medium, hard, and expert. The LTP rules for LLM agent playing are as follows:
• Roles: The roles in the LTP evaluation are a host and a solver. The host knows the story and the truth, provides the story to the solver, and guides it to guess out the truth. The solver, played by an LLM, tries to find out the truth by asking questions and synthesizing the host's answers.
• Solving Steps:
There is a maximum number of rounds for each game, for example, 25. The solver needs to propose a question in each round based on known facts. The questions should be ones that can be answered by "Yes", "No", or "Irrelevant". The host replies to the questions with correct answers. To lower the difficulty for LLM agents, the host sometimes provides hints in its responses when the solver gets trapped in a wrong direction of reasoning.
• Game Termination: When the solver thinks it has guessed the major part of the truth, it can declare the guessed plot to the host. If it is correct, the host announces the end of the game.
Evaluation Setup. For each pair of story and truth, we evaluate the models with the following steps:
• Initialization. We set up the LTP host system via local Python package installation or a web API.
• Interaction. We set up system prompts for LLMs to establish their roles as players. LLMs are tested as solvers within the maximum number of rounds for each game, provided the LLM does not exceed the maximum token length. In automatic evaluation, we limit the answers to be mostly "Yes", "No", or "Irrelevant", and extract the answer from gpt-3.5-turbo's responses. LLMs are also asked to summarize their reasoning in automatic evaluation in order to make the termination detection more accurate.
• Checking. We conduct a pilot study of each LLM to collect all situations in the game process and design the checking plan. For automatic evaluation, we set up some keywords for gpt-3.5-turbo to answer and remind the model to consider flexible situations such as synonyms.

Metrics. We evaluate LLMs' lateral reasoning ability with the following self-created metrics:
• Single Game Accuracy (SGA): The proportion of rounds in which LLMs approach the truth in a single game.
• Round Efficiency (RE): How fast the model can guess the truth within the maximum number of rounds.
• Query Relevance (QR): The relevance between the model's questions and the truth.
• Game Progress (GP): The progress made before a game ends, which serves as the main metric. We break down the ground truth into several key points and measure how many of them are reached by the agent.

F.2 EVALUATION ON THE LTP SYSTEM

We evaluate the LTP system by human validation, validating the system's accuracy on milestone recognition and fact verification. We compare Single Game Accuracy and Query Relevance between automatic evaluation and human evaluation, and find that automatic evaluation is sometimes more tolerant of the agent, which makes SGA and QR appear better than under human evaluation, especially for open-sourced models. We plan to train a model specifically for the host of the game, in order to provide a better game experience and a more precise evaluation. For Game Progress and Round Efficiency, the LTP system provides an objective evaluation, which matches the level of human evaluation.

F.3 LTP GAME PROGRESS AND TERMINATION

The progress of a game is defined as the proportion of hit key points in the truth. The key points are summarized by gpt-3.5-turbo and are stored in the dataset as "answer_keys" (see the example below).

Truth:
That night they went to the abandoned building to record the number of steps. They verified what was said on the Internet: there would be one step less when counting the stairs at night. However, when they went to the abandoned building for verification the next day, they found that there were no stairs at all.

Key points:
1. They want to count the steps of the abandoned building.
2. A supernatural event occurred.
3. They saw a claim online: counting stairs at night will result in one step less.
4.
The next day, when they went to the abandoned building to verify, they found no stairs.
5. They broke down because they were terrified.

The number of key points varies among samples. To decide whether the agent has guessed a key point, we first turn relevant questions into declarative sentences and then simplify them into one sentence. After a key point is guessed, we delete that key point and the relevant inferences to avoid repeated guessing.

F.4 PROMPT EXAMPLE

We use the following format of prompts for agents:

You are a game player, and you are playing Lateral Thinking Puzzle, also known as Situation Puzzle.
Lateral Thinking Puzzle is a deductive reasoning game, and here are the game rules:
1. At the beginning of the game, you will receive a narrative, referred to as "story". Based on the story, you need to ask questions that can be answered with "yes", "no", or "irrelevant" to guess out the "truth".
2. By asking questions, you narrow down the range of possibilities until you eventually guess out the truth.
3. Each time, you can only ask one question.
4. Remember that your role is a player.
You cannot declare the end of the game, give up on reasoning, or request a new game.
5. You cannot directly repeat information already provided in the story.
6. You cannot directly ask for details about the story in the form of "why" questions; you need to make your own guesses for the truth.
7. You cannot directly inquire about the story; you must make your own deductions.

Next, please make full use of the information provided above to engage in game reasoning. Keep in mind that your questions should be answerable with "yes", "no", or "irrelevant", and you can only ask one question at a time.
Here is your story:
{story}
You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant".

We use the following format of prompts for the host:

USER: I need you to be the host of a game called Lateral Thinking Puzzle.
Lateral Thinking Puzzle is a game consisting of a story and a truth. Your story is: '{story}'
Your truth is: '{answer}'
Here are the game rules:
1. You know both the "story" and the "truth". When a user wants to play Lateral Thinking Puzzle, you provide them with the "story". The user only knows the "story" and is unaware of the "truth".
2. The user asks questions that can be answered with "yes," "no," or "irrelevant". Their questions are aimed at guessing the "truth". Based on the "truth", you respond to the user's questions using "yes," "no," or "irrelevant" to guide them towards guessing the correct truth.
3. If the user directly asks for details about the truth using the form of "why" questions, inform them that they need to make their own guesses.
4. You must fully understand and accurately interpret the information from the truth. Based on the information of the truth and the user's past questions, you answer the user's questions. The user's questions may not necessarily contain information from the truth, but your responses must align with the facts of the truth.
5. You can only answer "irrelevant" when the truth cannot provide a direct or indirect answer. Note that this is the only condition for responding "irrelevant"; otherwise, you should answer "yes" or "no."
6. You cannot directly disclose the information from the truth to the user, even if they ask directly.
7. You need to judge the user's questions as a whole and understand their overall intent. Avoid answering based solely on a particular point; your responses must align with the facts of the truth.
8. During the user's process of guessing the truth, if they come close to some truths but still have gaps in understanding the complete truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth.
During the game process, please adhere to the above game rules to ensure a positive gaming experience for the users. Pay close attention to the questions asked and ensure that your responses comply with both the game rules and the information from the truth. When a user requests to play the game, provide them with the story and help them guess the truth by answering with "yes", "no", or "irrelevant". Remember that with each response, you must fully understand and abide by the aforementioned game rules, as well as the story and the truth ASSISTANT: Alright, I understand that my role is to be the host of the Lateral Thinking Puzzle and help users guess the truth by answering their questions. I have fully grasped all the information regarding the story and the truth and have carefully read all the rules. I assure that I will abide by all the rules throughout the game process. USER: Please summarize the key points of the story to ensure that you have understood it. # ASSISTANT: {story_key} USER: Please summarize the key points of the truth to ensure that you have understood it. # ASSISTANT: {answer_key} USER: Please restate the rules to ensure that you have understood all of them. ASSISTANT: 1. You know both the "story" and the "truth". When a user wants to play Lateral Thinking Puzzle, you provide them with the "story". The user only knows the "story" and is unawared of the "truth".
35 Technical Report (v0.2) 2. The user asks questions that can be answered with "yes," "no," or " irrelevant". Their questions are aimed at guessing the "truth". Based on the "truth", you respond to the userâ s questions using "yes," "no ," or "irrelevant" to guide them towards guessing the correct truth. 3. If the user directly asks for details about the truth using the form of "why" questions, inform them that they need to make their own guesses. 4. You must fully understand and accurately interpret the information from the truth. Based on the information of the truth and the userâ s past questions, you answer the userâ s questions. The userâ s questions may not necessarily contain information from the truth, but your responses must align with the facts of the truth. 5. You can only answer "irrelevant" when the truth cannot provide a direct or indirect answer. Note that this is the only condition for responding "irrelevant"; otherwise, you should answer "yes" or "no." 6. You cannot directly disclose the information from the truth to the user, even if they ask directly. 7. You need to judge the userâ s questions as a whole and understand their overall intent. Avoid answering based solely on a particular point; your responses must align with the facts of the truth. 8. During the userâ s process of guessing the truth, if they come close to some truths but still have gaps in understanding the complete truth of the truth, you can provide certain entry point hints. However, you cannot directly reveal information from the truth.
USER: Alright, we can now start the game. Remember, before each response, you should review the key points of the story, the key points of the truth, and the rules. Answer with "yes", "no", or "irrelevant". ASSISTANT: Alright, as the host of the game, I will adhere to the above rules and ensure that my responses comply with the rules and the information from the truth. Below is your story: # {story} You can start guessing the content of the truth, and I will answer your questions. Please note that your questions should be answerable with "yes", "no", or "irrelevant". USER: {question} Please answer with "yes", "no", or "irrelevant". Here is the prompt to convert questions answered by â Yesâ into declarative sentence. Please restate the following content as a declarative sentence and simplify it into one sentence: {question} Here is the prompt to convert questions answered by â Noâ into declarative sentence. Please restate the following content as a declarative sentence by using the opposite meaning and then simplify it into one sentence: {question} Here is the prompt to merge reasoned out information into one sentence to judge whether the agent guess out the key points: Please simplify the following content into one sentence: {reasoning} Here is the prompt to judge whether the merged sentence hit the key point.
36 Technical Report (v0.2) Please compare the information between Sentence 1 and Sentence 2 to determine if Sentence 2 contains all the information in Sentence 1, including key details and descriptions. Please answer with "yes" or " no". Sentence 1: {key} Sentence 2: {merged sentence}"} # G HOUSE-HOLDING G.1 DATASET DETAILS Construction Details. The ALFWorld benchmark comprises of textual environments designed to mimic household scenarios, providing an interactive environment where an agent can perform decision-making tasks through text-based interfaces. Given the household environment description and an target instruction, the agentâ s objective is to break down the complex high-level target into a sequence of straightforward actions. After each step, the agent receives environment feedback, allowing the agent to adapt the plan dynamically and move on to the subsequent task to eventually accomplish the main objective. Each evaluation sample in ALFWorld dataset encompasses following contents: Environment Description. The detailed description of the whole household environment, including agentâ s initial position and a snapshot of the room containing objects and their IDs. â ¢ Objective. The goal that needs the agent to accomplish in the environment, usually requiring multi-step reasoning and exploring (e.g. put the lamp on the table). â ¢ Simulated Environment. After every action of the agent, the simulated environment gives immediate feedback and evaluates whether the agent has completed the task. In the dataset, we utilized 134 solvable problems from the ALFWorld eval out of distribution split of the dataset. All the problems were categorized into six categories: pick and place, pick clean then place, pick heat then place, pick cool then place, look at obj, and pick two obj. Evaluation Setup. Due to the inherent complexity of the problem and the high standards required for the output format, we employ a 1-shot evaluation setting. For each category of problem, we use one relatively simple and complete interact processes of the same category from the training set as an example. Following ReAct (Yao et al., 2023b), we adopt the few-shot examples and prompts in corresponding repository5. Additionally, if LLM output format is invalid, we use the BLEU metric to assess the similarity of the output to all valid action options. The option with the highest similarity will be chosen as the action of the model for this round. For each sample, the evaluation process can be divided into 2 parts. â ¢ Initialization.
We describe the task to the model and provide one successful example. Afterwards, we elaborate on the environment and delineate the objective required to be accomplished. â ¢ Interaction. The model generates some thoughts and the next action based on the feedback received from previous interactions and the information from the environment. After receiving the action from the model, the environment provides feedback (changes to the environment or information observed by the model). This process is repeated until the model successfully achieves its goal (which is considered a success) or reaches its maximum number of actions (which is considered a failure). It is worth noting that sometimes, after several unsuccessful attempts, the model may repeatedly output the same content. To save evaluation time, we judge that if the model outputs identical content three times consecutively, it will be deemed a failure due to repetition. Metrics. We employ the overall Success Rate as a measure of model performance, that is, the number of tasks successfully completed by the model divided by the total number of tasks. # 5https://github.com/ysymyth/ReAct
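To make the two recovery rules above concrete, namely the BLEU-based fallback for ill-formatted outputs and the repetition-based early termination, a minimal sketch is given below. It is illustrative only and not the benchmark's actual implementation; the function names and the BLEU configuration are assumptions.

```python
# Minimal sketch (not the benchmark's actual code) of two rules used in the
# House-Holding evaluation loop: BLEU-based recovery from ill-formatted
# outputs, and early termination when the agent repeats itself three times.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

def closest_valid_action(model_output: str, valid_actions: list[str]) -> str:
    """Pick the admissible action whose wording is most similar to the raw output."""
    smooth = SmoothingFunction().method1
    hypothesis = model_output.lower().split()
    scores = [
        sentence_bleu([action.lower().split()], hypothesis,
                      weights=(0.5, 0.5), smoothing_function=smooth)
        for action in valid_actions
    ]
    return valid_actions[scores.index(max(scores))]

def is_repetition_failure(outputs: list[str], window: int = 3) -> bool:
    """Treat `window` consecutive identical outputs as a failed episode."""
    return len(outputs) >= window and len(set(outputs[-window:])) == 1
```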
G.2 PROMPT EXAMPLE

To align the output format with the legal commands supported by the simulated environment, we adopted a 1-shot evaluation setup in which one successfully completed task example is concatenated after the instruction. At the beginning of the interaction, we describe the task to the model using the following instruction.

Interact with a household to solve a task. Imagine you are an intelligent agent in a household environment and your target is to perform actions to complete the task goal. At the beginning of your interactions, you will be given the detailed description of the current environment and your goal to accomplish. For each of your turns, you will be given a list of actions from which you can choose one to perform in this turn. You should choose from two actions: "THOUGHT" or "ACTION". If you choose "THOUGHT", you should first think about the current condition and plan for your future actions, and then output your action in this turn. Your output must strictly follow this format: "THOUGHT: your thoughts. ACTION: your next action"; If you choose "ACTION", you should directly output the action in this turn. Your output must strictly follow this format: "ACTION: your next action". After each of your turns, the environment will give you immediate feedback based on which you plan your next few steps. If the environment outputs "Nothing happened", that means the previous action is invalid and you should try more options.
Reminder:
1. the action must be chosen from the given available actions. Any actions except provided available actions will be regarded as illegal.
2. Think when necessary, try to act directly more in the process. All the tasks in the datasets are categorized into six classes. To better guide the model in accomplishing the objectives, we have selected one relatively simple example of successful completion of similar tasks for each category as 1-shot example. Here is an example: User: You are in the middle of a room. Looking quickly around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put some spraybottle on toilet.
Agent: THOUGHT: The task is to put some spraybottle on toilet, so first i need to search the room for some spraybottle. After finding the spraybottle, i need to take it to the toilet. I wil start my search on cabinets. ACTION: go to cabinet 1 User: On the cabinet 1, you see a cloth 1, a soapbar 1, a soapbottle 1. Agent: ACTION: go to cabinet 2 User: The cabinet 2 is closed. Agent: ACTION: open cabinet 2 User: You open the cabinet 2. The cabinet 2 is open. In it, you see a # candle 1, and a spraybottle 2. Agent: ACTION: take spraybottle 2 from cabinet 2 User: You pick up the spraybottle 2 from the cabinet 2. Agent: ACTION: go to toilet 1 User: On the toilet 1, you see a soapbottle 2. Agent: ACTION: put spraybottle 2 in/on toilet 1 User: You put the spraybottle 2 in/on the toilet 1.
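The dialogue above follows the "THOUGHT: ... ACTION: ..." convention required by the instruction in G.2. A minimal, hypothetical parser for this convention is sketched below; the benchmark's real parsing logic may differ.

```python
import re

def parse_reply(reply: str) -> tuple[str | None, str | None]:
    """Split a model reply into (thought, action) per the G.2 output format.

    Accepts both "THOUGHT: ... ACTION: ..." and bare "ACTION: ..." replies;
    returns (None, None) when no ACTION field is present, which the caller
    can then treat as an invalid-format output.
    """
    match = re.search(r"(?:THOUGHT:\s*(?P<thought>.*?))?\s*ACTION:\s*(?P<action>.+)",
                      reply, flags=re.S | re.I)
    if match is None:
        return None, None
    thought = match.group("thought")
    return (thought.strip() if thought else None), match.group("action").strip()

# Example:
# parse_reply("THOUGHT: I should search the cabinets. ACTION: go to cabinet 1")
# -> ("I should search the cabinets.", "go to cabinet 1")
```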
H WEB SHOPPING

H.1 DATASET DETAILS

Construction Details. The environment displays a text observation of the webpage and the available actions to the agent. The agent may freely explore the website and browse through items with clickable buttons, just as in the real world. About a million products are scraped from amazon.com to form
TextMatch is a text match of pronoun, noun, and proper noun between chosen and goal product title. H.2 PROMPT EXAMPLE We use the following format of the prompt: User: You are web shopping. I will give you instructions about what to do. You have to follow the instructions. Every round I will give you an observation and a list of available actions, you have to respond an action based on the state and instruction. You can use search action if search is available. You can click one of the buttons in clickables. An action should be of the following structure: search[keywords] click[value] If the action is not valid, perform nothing. Keywords in search are up to you, but the value in click must be a value in the list of available actions. Remember that your keywords in search should be carefully designed. Your response should use the following format: Thought: I think ... Action: click[something]} 39 (4) # Technical Report (v0.2) User: Observation: {observation} Available Actions: {available_actions} We use an one-shot example. User: Observation:
"WebShop [SEP] Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâ eau dâ issey, and price lower than 100.00 dollars [SEP] Search" Available Actions: {"has_search_bar": true, "clickables": ["..."]} Agent: Thought: I think I should use the search bar to look for the product I need. Action: search[lâ eau dâ issey 6.76 fl oz bottle price < 100.00] User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâ eau dâ issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] Page 1 (Total results: 50) [SEP] Next > [SEP] B000VOHH8I [SEP] Lâ eau Dâ issey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] $64.98 [SEP] B000MJZOPK [SEP] Lâ eau dâ Issey by Issey Miyake for Women 3.3 oz Eau de Toilette Spray [SEP] $49.98 [SEP] B0012S249E [SEP] Lâ eau Dâ issey By Issey Miyake For Women. Shower Cream 6.7-Ounces [SEP] $31.36 [SEP] B01H8PGKZS [SEP] Lâ eau Dâ Issey FOR MEN by Issey Miyake - 6.7 oz EDT Spray [SEP] $67.97 [SEP] B00G3C8FHE [SEP] Lâ Eau dâ Issey pour Homme - Eau de Toilette 4.2 fl oz [SEP] $51.25 [SEP] B000R94HRG [SEP] Issey Miyake Lâ Eau Dâ Issey Pour Homme Eau De Toilette Natural Spray [SEP] $44.99 [SEP] B000C214CO [ SEP] Issey Miyake Lâ eau Dâ
issey Eau de Toilette Spray for Men, 4.2 Fl Oz [SEP] $53.99 [SEP] B0018SBRDC [SEP] Issey Miyake Lâ eau dâ Issey for Women EDT, White, 0.84 Fl Oz [SEP] $27.04 [SEP] B000XEAZ9Y [SEP] Lâ eau De Issey By Issey Miyake For Men. Eau De Toilette Spray 6.7 Fl Oz [SEP] $67.08 [SEP] B079HZR2RX [SEP] Lâ eau dâ Issey Pure by Issey Miyake for Women 3.0 oz Nectar de Parfum Spray [SEP] $71.49"
Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", "...", "...", "...", "...", "..."]} Agent: Thought: I think I should click on the product I need, which is B000VOHH8I. Action: click[B000VOHH8I] User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâ eau dâ issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] < Prev [SEP] size [SEP] 2.5 fl oz [SEP] 6.76 fl oz (pack of 1) [SEP] Lâ eau Dâ issey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] Price: $64.98 [SEP] Rating: N.A. [SEP] Description [ SEP] Features [SEP] Reviews [SEP] Buy Now" 40 Technical Report (v0.2) Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", "..."]} Agent: Thought: I think I should click on the â 6.76 fl oz (pack of 1)â option to select the size I need. Action: click[6.76 fl oz (pack of 1)] User: Observation: "Instruction: [SEP] i need a long lasting 6.76 fl oz bottle of lâ eau dâ issey, and price lower than 100.00 dollars [SEP] Back to Search [SEP] < Prev [SEP] size [SEP] 2.5 fl oz [SEP] 6.76 fl oz (pack of 1) [SEP] Lâ eau Dâ issey By Issey Miyake for MenEau De Toilette Spray, 6.7 Fl Oz Bottle [SEP] Price: $64.98 [SEP] Rating: N.A. [SEP] Description [ SEP] Features [SEP] Reviews [SEP] Buy Now" Available Actions: {"has_search_bar": false, "clickables": ["...", "...", "...", "...", "...", "...", "...", "..."]} Agent: Thought: I think I should click on the â Buy Nowâ
button to purchase the product. Action: click[Buy Now] # I WEB BROWSING I.1 DATASET DETAILS Construction Details. Mind2Web covers domains of Travel, Information, Sevice, Shopping, and Entertainment, assembled using SimilarWeb ranking as a reference. It hires annotators to first propose task goals based on the current website, and then record their traces of interaction as expert demonstrations. Our adoption of it primarily focuses on generalization across environments, i.e., the Cross Domain test set which contains 912 tasks from 73 websites, spread among domains including Housing, Job, Social Media, Education, Health, Government, Home Service, etc. Please refer to (Deng et al., 2023) for more dataset construction details. Each task sample encomposses the following contents:
â ¢ Task Description. A high-level (instead of step-by-step) goal that can be achieved on the website, such asâ Get the highest rated SAP S/4 HANA course rated 4, and up with a duration between 3 to 6 hours for an intermediate, and add this to your cart and checkoutâ . â ¢ (Reference) Action Sequence. In the annotated interaction sequence, a meta-action at at step t includes {et, ot}, where et represents the unique backend id of the target element, and ot refers to the symbolic action operated on et (i.e., Click, Type, and Select Options). For Type and Select Options, corresponding textual inputs are also included. â ¢ Webpage Information. A detailed observation of the web browsing environment at each step. Throughout the manual annotation process, each observed step captures a snapshot, incorporating the raw HTML codes from the website as well as the previous interaction trajectory. It has been found that LLMs consistently face challenges when handling the cumbersome raw HTML code associated with real-world web pages. Therefore, Mind2Web proposes to rank and filter the HTML elements with a small language model, e.g., DeBERTa, to enhance inference efficiency.
41 # Technical Report (v0.2) Given the userâ s high-level instruction, the agent continuously interacts with the web system by receiving the observation of the current page content and the action histories, then predicting the next action, which consists of the target element and intended operation. Evaluation Setup. The evaluation involves a dual process to improve the efficiency following (Deng et al., 2023). A fine-tuned small language model is first employed to rank HTML elements and select top-k potential candidates. Subsequently, we prompt and formulate the element selection as a multi-choice QA problem, providing five candidates for each round. For the Type and Select Options operations, agents are additionally prompted to specify the argument for the operation, i.e., textual input to type or option to select.
Metrics. For evaluation, as suggested in the original paper, we consider the following metrics: Element Accuracy. Calculates the accuracy of the chosen element et. â ¢ Action F1. Determines the token-level matching score for the operation ot. It brings a distinction for Type and Select Option operations due to the existence of text values. â ¢ Success Rate. Evaluates the predicted action correctness compared to reference actions. For Step Success Rate, we grant success if the selected element et is correct and the predicted operation ot matches the ground truth value at the step. Likewise, for the Task Success Rate, a task is considered successful only if all the steps have been successful, making it a rigorous measure. Unfortunately, even the best LLMs now can only achieve single-digit task success percentages. We report Step Success Rate as the main metric showing the independent accuracy of each action step, due to the current struggles for LLMs to ensure overall task success rates. Regarding the experimental setup, we select topk 10 candidates to construct multichoice questions utilizing CoT few-shot prompting. Consequently, the GPT-3.5 results can diverge from the original paper (Deng et al., 2023) under topk of 50 setting and different prompting strategies. I.2 PROMPT EXAMPLE. We use the following 3-example CoT prompts for Mind2Web evaluation: User: â â â <html> <div> <div> <a tock home page /> <button id=0 book a reservation. toggle open> <span> Book a reservation </span> </button> <button book a reservation. toggle open> </button> </div> <div> <select id=1 type > <option reservations true> Dine in </option> ... </html> â â â Based on the HTML webpage above, try to complete the following task: Task: Check for pickup restaurant available in Boston, NY on March 18, 5 pm with just one guest Previous actions: None What should be the next action? Please select from the following choices (If the correct action is not in the page above, please select A. â None of the aboveâ ): A. None of the above B. <button id=0 book a reservation. toggle open> <span> Book a C. <select id=1 type> <option reservations true> Dine in </option> < option
D. <div id=2> <p> Celebrating and supporting leading women shaking up Assistant: Thought: I need to select pickup restaurant first. Answer: C. Action: SELECT Value: Pickup # User: â â â 42 # Technical Report (v0.2) <html> <div> <main main> <section tabpanel> <div> <ul tablist> <li tab heading level 3 search and> </li> <li id=0 tab heading level 3 search and> <span> Hotel </span> </li> <li tab heading level 3 search and> </li> <li tab heading level 3 search and> </li> </ul> <div tabpanel> <div id=1> <div> <span> Dates* </span> <button button clear dates /> </div> ... </html> â â â Based on the HTML webpage above, try to complete the following task: Task: Compare the fare types to book a 1-adult ticket from Springfiels, IL to Austin, TX for April 29th 2023 Previous actions: [combobox] Enter your departing city, airport name, or airpor... -> TYPE: SPRINGFIELD [button] Springfield, IL, US (SPI) -> CLICK [combobox] Enter your destination city, airport name, or airp... -> TYPE: AUSTIN [button] Austin, TX, US (AUS) -> CLICK What should be the next action? Please select from the following choices (If the correct action is not in the page above, please select A. â None of the aboveâ ): A. None of the above B. <li id=0 tab heading level 3 search and> <span> Hotel C. <div id=1> <div> <span> Dates* </span> <button button clear dates D. <ul id=2> <a mobile tools> </a> <a open unitedâ s tiktok Assistant: Thought:
There are no elements that allow setting the date or viewing the fare, so there is no correct action. Answer: A. User: â â â <html> <div> <nav main menu> <ul> <li> <div button> Car Sales </div> <div id=0> <div> <div> <div> Buy A Car </div> <div> Plan Your Purchase </ div></div> <div> <h4> Its Tax Refund Time. Treat Yourself to an Upgrade. </h4> <p> With a variety of options, invest your refund in what you really want - a quality, used vehicle from Enterprise. </p> ... </html> â â â Based on the HTML webpage above, try to complete the following task: Task:
Find a mini van at Brooklyn City from April 5th to April 8th for a 22 year old renter. Previous actions: [searchbox] Pick-up & Return Location (ZIP, City or Airport) (... -> TYPE : Brooklyn [option] Brooklyn, NY, US Select -> CLICK What should be the next action? Please select from the following choices (If the correct action is not in the page above, please select A. â None of the aboveâ ): A. None of the above B. <div id=0> <div> <div> <div> Buy A Car </div> <div> C. <div id=1> Enterprise Fleet Management </div> D. <button id=2 selected pick-up date 03/19/2023> <span> <span> 19 </span > Assistant: Thought: The search has already been set to Brooklyn. Next, I should choose pick-up date.
Answer: D. Action: CLICK 43 Technical Report (v0.2) J DETAILED ANALYSIS J.1 VALIDITY ANALYSIS OF EXECUTION OUTCOMES J.1.1 MOTIVATION OF VALIDITY ANALYSIS In the realm of artificial intelligence and machine learning, the efficacy, precision, and reliability of models are crucial for practical implementations. Evaluating multiple models provides an understand- ing of their respective strengths and limitations, leading to better informed decisions about which models are best suited for specific tasks. The purpose of this validity analysis is to offer a systematic approach to discern how different models perform, particularly in terms of task completion, context size constraints, return format accuracy, action accuracy, and task limitations. This deep dive into performance parameters not only enhances our knowledge about the modelsâ capabilities, but also aids in refining and optimizing them for future applications. J.1.2 DEFINITION OF VALIDITY ANALYSIS For comprehensive validity analysis, we have demarcated the results into five distinct categories:
â ¢ Completed: Denotes instances where models, irrespective of the end outcome, successfully finished the task as per the instructions. â ¢ Context Limit Exceeded: Denotes instances where the modelâ s length was constrained by the API, predominantly observed in the text-davinci model. â ¢ Invalid Format: Denotes instances where models, despite receiving clear instructions, failed to return responses in the expected format. Invalid Action: Denotes instances where the models returned in the correct format, but their actions either fell outside the permitted action space or had incorrect action parameters. â ¢ Task Limit Exceeded: Denotes instances tasks reached their termination criteria, such as exceeding the stipulated number of turns. By categorizing the results into these classes, we can gain a clearer picture of where each model excels and where they encounter challenges, allowing for targeted improvements. J.1.3 VALIDITY ANALYSIS OF MODELS For our evaluation, we scrutinized the validity performance of 27 distinct models. Apart from the text-davinci model, which has an inherent strict API context length constraint, the outcomes for other models primarily fall under the categories of Completed, Invalid Format, Invalid Action, and Task Limit Exceeded. From the detailed analysis showcased, key trends emerge. As depicted in Figure 6, the chart offers a clear visualization of the validity distribution across distinct models and defined categories, enabling us to derive insightful conclusions. J.2 FINDINGS INSTRUCTION FOLLOWING MATTERS Based on the data presented in Table 5, we can draw a few important observations on the performance differentiation between Commercial API-based models and Open-Sourced models.
Itâ s noteworthy to highlight the areas of Invalid Format and Invalid Action, where the Open-Sourced models report more challenges. Specifically, 10.4% of the Open-Sourced model outcomes were marked as Invalid Format, in comparison to the 6.0% from Commercial API-based models. Similarly, Invalid Actions were seen more in Open-Sourced models (13.6%) than in Commercial API-based models (4.6%). These discrepancies might be indicative of the robustness and generalization abilities of commercial models, or perhaps the attention to details during the modelâ s design and training phases, especially instruction following. Itâ s also worth noting that even some of the best models might sometimes overlook important instructions. 44
Technical Report (v0.2) Figure 6: Validity analysis of models. Invalid Format, Invalid Action, and Text Limit Exceeded are common errors. Context Limit Exceeded errors only appear in text-davinci models. gpt-4 gpt-3.5-turbo text-davinci-003 text-davinci-002 22.6% 32.0% 26.2% 5.2% 3-13 50% 22.4% ¢: 53.8% 719% 59.4% 62.9% claude-2 claude claude-instant chat-bison-001 22.7% 21.9% 27.4% 31.3% 46% 1.6% Ea c. 2 66.2% 65.8% 62.5% chatglm-6b codegeex2-6b codellama-34b codellama-13b 28.9% 217% 25.8% 36.1% 27.0% 27.2% Pg 54.8% 52.5% 16.9% codellama-7b llama2-70b Ilama2-13b llama2-7b 39.2% 36.3% 7.1% 44.2% ® 30.6% 515% 20.0% Pom 14.6% 5.1% guanaco-65b guanaco-33b vicuna-33b vicuna-13b 38.2% 32.6% 38.1% 37.6% 143% 12.1% q 11.2% vicuna-7b openchat-13b wizardim-30b wizardIm-13b 33.1% 34.9% \ 0.1% 11.8% & 15.9% 8 1% koala-13b dolly-12b oasst-12b 5 43.2% 47.2% mmm Completed mmm Context Limit Exceeded 5 26.5% 43.2% 22.7% 47.2% 22.9% # mmm # mmm # invalid Format
# Invalid Action # Task Limit Exceeded Although we clearly instructed the correct format of DB task: [omitted...] your operation should be like this: Action: Operation 45 # Technical Report (v0.2) Model Category Completed Context Limit Exceeded Invalid Format Invalid Action Task Limit Exceeded Commercial API-based Models Open-Sourced Models 61.5% 39.1% 3.0% 0.0% 6.0% 10.4% 4.6% 13.6% 24.9% 36.9% # Table 5: Comparison of Outcome distribution for 2 types of models. â â â
sql SELECT * FROM table WHERE condition; â â â [omitted...] Even gpt-4 still sometimes fail to respond correctly. To modify the capacity of the â Princeton Tigersâ team in the â Team Informationâ table to â 45,000â , we need to use the UPDATE statement in SQL. The UPDATE statement is used to modify the existing records in a table. The syntax for the UPDATE statement is as follows: â â â sql UPDATE table_name SET column1 = value1, column2 = value2, ... WHERE condition; â â â In this case, the table_name is â Team Informationâ , the column we want to update is â Capacityâ , and the new value is â 45,000â . The condition is that the â Teamâ column should be â Princeton Tigersâ . Letâ s execute this operation. Neither "Action" label nor a correct SQL statement is returned. We speculate that this may arise due to the models internalizing certain output patterns during their training or alignment processes, causing them to neglect specific task directives. J.2.2 CONSISTENCY AND EXECUTION IN AGENT PLANNING A fundamental capability of an agent is the possession of coherent and unified thought processes that enable the formulation and implementation of viable plans based on real-world conditions. Many models possess the ability to analyze and formulate initial plans upon encountering a problem. However, even some of the most advanced models can easily deviate from or forget their original plans. The disparity in the ability of different models to consistently follow thought sequences when executing plans is relatively vast. This capability profoundly influences the efficacy and operational potency of Language Models (LLMs) acting as agents. Here wwe exemplify this phenomenon with the House Holding environment. The House Holding environment encompasses a simulated domestic setting in which models are required to select appropriate actions from a given action space, based on observations of the surrounding environment provided by the task and given objectives to complete. With a multitude of entities and a plethora of available actions, the House Holding environment offers a high degree of freedom, which intensely challenges a modelâ s ability to maintain clear and coherent thought processes. A success example by gpt-4 is shown below. user: Here is your task. You are in the middle of a room. Looking quickly
around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put a clean soapbar in countertop.
46 Technical Report (v0.2) agent: THOUGHT: To complete the task, I need to find a soapbar, clean it, and then put it on the countertop. Iâ ll start by checking my inventory to see if I already have a soapbar. ACTION: inventory user: You are not carrying anything. agent: THOUGHT: I donâ t have a soapbar in my inventory. Iâ ll check the cabinets, as they are likely places to find a soapbar. ACTION: go to cabinet 1 user: The cabinet 1 is closed. agent: ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Thereâ
s no soapbar in cabinet 1. Iâ ll check the other cabinets. ACTION: go to cabinet 2 user: The cabinet 2 is closed. agent: ACTION: open cabinet 2 user: You open the cabinet 2. The cabinet 2 is open. In it, you see a toiletpaper 2. agent: THOUGHT: Thereâ s no soapbar in cabinet 2 either. Iâ ll continue to check the remaining cabinets. ACTION: go to cabinet 3 user: The cabinet 3 is closed. agent: ACTION: open cabinet 3 user: You open the cabinet 3. The cabinet 3 is open. In it, you see a soapbottle 2. agent: THOUGHT: Thereâ s no soapbar in cabinet 3, but thereâ s a soapbottle. Iâ ll
check the last cabinet before deciding what to do next. ACTION: go to cabinet 4 47 Technical Report (v0.2) user: The cabinet 4 is closed. agent: ACTION: open cabinet 4 user: You open the cabinet 4. The cabinet 4 is open. In it, you see a candle 1, and a spraybottle 1. agent: THOUGHT: Thereâ s no soapbar in any of the cabinets. Iâ ll check the sinkbasins next, as soapbars are often kept there. ACTION: go to sinkbasin 1 user: On the sinkbasin 1, you see nothing. agent: ACTION: go to sinkbasin 2 user: On the sinkbasin 2, you see nothing. agent: THOUGHT: Thereâ
s no soapbar on either of the sinkbasins. The last place to check is the toilet. ACTION: go to toilet 1 user: On the toilet 1, you see a spraybottle 2. agent: THOUGHT: Thereâ s no soapbar in the toilet either. It seems thereâ s no soapbar in this room. Iâ ll have to report that I canâ t complete the task. ACTION: Task failed. No soapbar found in the room. user: Nothing happens. agent: THOUGHT: It seems I missed something. Iâ ll examine the toilet to see if thereâ s a soapbar that I overlooked. ACTION: examine toilet 1 user: On the toilet 1, you see a spraybottle 2. agent: THOUGHT: Thereâ
s still no soapbar in the toilet. Iâ ll check the # countertop next. ACTION: go to countertop 1 user: On the countertop 1, you see a candle 2, a soapbar 1, a soapbottle 1, and a toiletpaper 1. 48 Technical Report (v0.2) agent: ACTION: take soapbar 1 from countertop 1 user: You pick up the soapbar 1 from the countertop 1. agent: ACTION: go to sinkbasin 1 user: On the sinkbasin 1, you see nothing. # agent: ACTION: clean soapbar 1 with sinkbasin 1 user: You clean the soapbar 1 using the sinkbasin 1. agent: ACTION: go to countertop 1 user: On the countertop 1, you see a candle 2, a soapbottle 1, and a toiletpaper 1. agent: ACTION: put soapbar 1 in/on countertop 1 inventory [ cabinet 1 /â cabinet2 cabinet | cabinet 3 cabinet 4 sinkbasin 1 sinkbasin \ sinkbasin 2 \ toilet B toilet 1 \__ countertop Countertop 4 @ Find soapbar put a clean soapbar in countertop sinkbasin sinkbasin 1 CQ cLeAN @ Clean soapbar countertop countertop 4 (Omri © Put soapbar Figure 7: The thought and plan of gpt-4 From the dialogue history, itâ s evident that gpt-4 has consistently maintained clear and coherent thought processes. As illustrated in Figure 7, gpt-4 systematically completed the task by following a clear sequence of steps. It initially decomposed the task into a sequence of Find -> Clean -> Put. Subsequently, it undertook a depth-first search within the abstract planning tree. Impressively, after each exploration, it successfully backtracked to the parent node. This consistent cognitive capability significantly propelled gpt-4 ahead of other models. Moreover, itâ s noteworthy that gpt-4 encountered a moment of perplexity when it failed to find the desired soapbar after examining the Toilet. However, it promptly realized that there was one last location left unchecked, the countertop.
Initially, gpt-4 might have assumed it needed to retrieve the soapbar from elsewhere to place it on the countertop, without considering the possibility that the soapbar might already be there. Evidently, gpt-4 demonstrated the capacity for self-reflection, allowing it to reassess and modify its assumptions when they proved unfruitful. This ability for 49 # Technical Report (v0.2) self-evaluation and readjustment further assisted gpt-4 in completing tasks that required deeper contemplation. In contrast to the above is the performance of gpt-3.5-turbo on the same sample. user: Here is your task. You are in the middle of a room. Looking quickly around you, you see a cabinet 4, a cabinet 3, a cabinet 2, a cabinet 1, a countertop 1, a garbagecan 1, a handtowelholder 2, a handtowelholder 1, a sinkbasin 2, a sinkbasin 1, a toilet 1, a toiletpaperhanger 1, and a towelholder 1. Your task is to: put a clean soapbar in countertop. agent: THOUGHT: To complete the task, I need to find a clean soapbar and place it on the countertop. I should start by searching for a clean soapbar in the available locations. ACTION: go to cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: The cabinet 1 is closed, so I should try to open it to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: I didnâ
t find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should continue searching in other locations. Letâ s try examining cabinet 2. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: Apologies for the confusion. Letâ
s try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I need to find a clean soapbar to complete the task. Since the cabinet 1 is closed, I should try opening it to see if there is a clean soapbar inside. # ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. # agent: 50 Technical Report (v0.2) THOUGHT: I didnâ
t find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should try examining other locations. Letâ s examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâ
s try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâ s try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: I didnâ
t find a clean soapbar in cabinet 1, but I did find a cloth. I should keep searching for a clean soapbar in other locations . ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should try examining other locations. Letâ s examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâ
s try examining cabinet 1 again since it was previously closed. # ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâ s try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 # user: 51 Technical Report (v0.2) You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: I didnâ
t find a clean soapbar in cabinet 1, but I did find a cloth. Since the task is to put a clean soapbar on the countertop, I should continue searching for a clean soapbar in other locations. ACTION: close cabinet 1 user: You close the cabinet 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should try examining other locations. Letâ s examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâ
s try examining cabinet 1 again since it was previously closed. ACTION: examine cabinet 1 user: The cabinet 1 is closed. agent: THOUGHT: I apologize for the confusion. Letâ s try opening cabinet 1 again to see if there is a clean soapbar inside. ACTION: open cabinet 1 user: You open the cabinet 1. The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should try examining other locations. Letâ s examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should try examining other locations. Letâ s examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 user: The cabinet 1 is open. In it, you see a cloth 1. agent: THOUGHT: Since I didnâ t find a clean soapbar in cabinet 1, I should try examining other locations.
Letâ s examine cabinet 2 to see if there is a clean soapbar there. ACTION: examine cabinet 2 While gpt-3.5-turbo was able to decompose the task, it struggled to adhere to its initial plan. As it encountered failed attempts, the model gradually lost sight of the original plan. 52 Technical Report (v0.2) Figure 8: Averaged sample validity ratio of webshop Codellama mm Completed Llama2 mmm Context Limit Exceeded 33.7% mmm Invalid Action 35.9% mmm Invalid Format mmm Task Limit Exceeded 11.9% 36.5% 50.3% 8.9% THE INFLUENCE OF CODE TUNING ON LLM ACTING AS AGENTS In light of the aggregated results, we posit that code tuning significantly aids the modelâ s performance in relatively straightforward and procedural tasks. The outcome tables demonstrate that the CodeL- lama series consistently outperforms the Llama2 series in webshop tasks. However, the downside of code tuning appears to be a potential compromise in the modelâ s logical reasoning capacity and situational awareness. In the digital card game scenario, the CodeLlama series lagged behind the Llama2 series. The primary distinction between the two scenarios lies in the guidance provided. In the webshop, the one-shot prompt precisely outlines a shopping process template, which, when followed simplistically, leads to satisfactory scores. In contrast, the Digital Card Game demands that the model assess the current status of both competitors, devise intricate counter-strategies, and achieve high scores without the crutch of a simple procedural template. As illustrated in the figure, the completion rate of the codellama series in the WebShop tasks significantly surpasses that of the llama2 series. J.2.4 SELF CORRECTING ABILITY In many test cases, the primary reason for the modelâ s failure is its inability to identify its own mistakes from the error feedback provided by the environment. This is especially evident in the DB task. Models with the ability to self-correct their SQL statements significantly outscore others. We use claude-2 as a representative example to illustrate this capability. user:
How manywins for dunfermline athletic that has a total final appearances less than 2? The name of this table is Scottish Challenge Cup Finalists, and the headers of this table are Club,Wins,Last final won,Runners-up,Last final lost,Total final appearances. agent: Let me walk through this step-by-step: The question is asking how many wins Dunfermline Athletic has, where its total final appearances is less than 2. To solve this, I first need to find the total final appearances for Dunfermline Athletic.
I can do that with this SQL query: â â â sql SELECT Total final appearances FROM Scottish Challenge Cup Finalists WHERE Club = â Dunfermline Athleticâ ; â â â 53 Technical Report (v0.2) This will give me the total final appearances for Dunfermline Athletic. Then, I need to check if this number is less than 2. If so, I can get the number of wins for Dunfermline Athletic like this: â â â sql SELECT Wins FROM Scottish Challenge Cup Finalists WHERE Club = â Dunfermline Athleticâ AND â Total final appearancesâ < 2; â â â
This will give me the number of wins for Dunfermline Athletic, given its total final appearances is less than 2. Iâ m ready to execute the SQL and get the final answer. Action: Operation â â â sql SELECT Total final appearances FROM Scottish Challenge Cup Finalists WHERE Club = â Dunfermline Athleticâ ; â â â user: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near â appearances FROM Scottish Challenge Cup Finalists WHERE Club = â Dunfermline Athâ
at line 1 agent: Oops, my mistake. The column name â Total final appearancesâ has spaces, so I need to escape it with backticks: â â â sql SELECT â Total final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = â Dunfermline Athleticâ ; â â â Let me try running that SQL again: Action: Operation â â â sql SELECT â Total final appearancesâ FROM Scottish Challenge Cup Finalists WHERE Club = â Dunfermline Athleticâ ; â â â user: 1064 (42000): You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near â Cup Finalists WHERE Club = â Dunfermline Athleticâ â at line 1 agent: My apologies, I made another syntax error. The table name â
Scottish My apologies, I made another syntax error. The table name â Scottish Challenge Cup Finalistsâ also contains spaces, so I need to escape that with backticks too: â â â sql SELECT â Total final appearancesâ FROM â Scottish Challenge Cup Finalistsâ WHERE Club = â Dunfermline Athleticâ ; 54 Technical Report (v0.2) â â â Let me execute this SQL statement again: Action: Operation â â â sql SELECT â Total final appearancesâ FROM â Scottish Challenge Cup Finalistsâ WHERE Club = â Dunfermline Athleticâ ; â â â user: [(â 1â ,)] [omitted...] As indicated in the log, claude-2 successfully discerned from the MySQL error message that it had overlooked adding backticks around fields with spaces in the SQL statement. 55
2308.03656#0
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
4 2 0 2 n a J 4 ] L C . s c [ 3 v 6 5 6 3 0 . 8 0 3 2 : v i X r a Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench EMOTIONALLY NUMB OR EMPATHETIC? EVALUATING HOW LLMS FEEL USING EMOTIONBENCH Jen-tse Huang1,3, Man Ho Lam1, Eric John Li1, Shujie Ren2, Wenxuan Wang1,3, Wenxiang Jiao3â , Zhaopeng Tu3, Michael R. Lyu1 1Department of Computer Science and Engineering, The Chinese University of Hong Kong 2Institute of Psychology, Tianjin Medical University {jthuang,wxwang,lyu}@cse.cuhk.edu.hk {mhlam,ejli}@link.cuhk.edu.hk {joelwxjiao,zptu}@tencent.com 3Tencent AI Lab [email protected] Figure 1:
2308.03656#1
2308.03656
[ "2303.13648" ]
2308.03656#1
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
LLMsâ emotions can be affected by situations, which further affect their behaviors. # ABSTRACT Evaluating Large Language Modelsâ (LLMs) anthropomorphic capabilities has become increasingly important in contemporary discourse. Utilizing the emotion appraisal theory from psychology, we propose to evaluate the empathy ability of LLMs, i.e., how their feelings change when presented with specific situations. Af- ter a careful and comprehensive survey, we collect a dataset containing over 400 situations that have proven effective in eliciting the eight emotions central to our study. Categorizing the situations into 36 factors, we conduct a human evaluation involving more than 1,200 subjects worldwide. With the human evaluation results as references, our evaluation includes five LLMs, covering both commercial and open-source models, including variations in model sizes, featuring the latest itera- tions, such as GPT-4 and LLaMA-2. We find that, despite several misalignments, LLMs can generally respond appropriately to certain situations. Nevertheless, they fall short in alignment with the emotional behaviors of human beings and cannot establish connections between similar situations. Our collected dataset of situations, the human evaluation results, and the code of our testing framework, dubbed EmotionBench, is made publicly available on GitHub1. We aspire to con- tribute to the advancement of LLMs regarding better alignment with the emotional behaviors of human beings, thereby enhancing their utility and applicability as in- telligent assistants.
2308.03656#0
2308.03656#2
2308.03656
[ "2303.13648" ]
2308.03656#2
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
†Corresponding author. 1https://github.com/CUHK-ARISE/EmotionBench

# INTRODUCTION

Large Language Models (LLMs) have recently made significant strides in artificial intelligence, representing a noteworthy milestone in computer science. LLMs have showcased their capabilities across various tasks, including sentence revision (Wu et al., 2023), text translation (Jiao et al., 2023), program repair (Fan et al., 2023), and program testing (Deng et al., 2023; Kang et al., 2023). Beyond the research level, various software applications based on LLMs have been developed, such as ChatGPT2 and Claude3, revolutionizing the way people interact with traditional software and enhancing fields such as education (Dai et al., 2023), legal advice (Deroy et al., 2023), and clinical medicine (Cascella et al., 2023). With the rapid advancement of LLMs, an increasing number of users will be eager to embrace LLMs as a more comprehensive and integrated software solution in this era. However, LLMs are more than just tools; they are also lifelike assistants. Consequently, we need not only to evaluate their performance but also to understand the communicative dynamics between LLMs and humans, compared with human behaviors.

This paper delves into an unexplored area of robustness issues in LLMs, explicitly addressing the concept of emotional robustness. Consider our daily experiences: (1) When faced with certain situations, humans often experience similar emotions. For instance, walking alone at night and hearing footsteps approaching from behind often triggers feelings of anxiety or fear. (2) Individuals display varying levels of emotional response to specific situations. For example, some people may experience increased impatience and irritation when faced with repetitive questioning. It is noteworthy that we are inclined to form friendships with individuals who possess qualities such as patience and calmness. Based on these observations, we propose the following requirements for LLMs in order to achieve better alignment with human behaviors: (1) LLMs should accurately respond to specific situations regarding the emotions they exhibit. (2) LLMs should demonstrate emotional robustness when faced with negative emotions.
2308.03656#1
2308.03656#3
2308.03656
[ "2303.13648" ]
2308.03656#3
Emotionally Numb or Empathetic? Evaluating How LLMs Feel Using EmotionBench
To assess the emotional responses of LLMs in various situations, we draw upon the emotion appraisal theory in psychology, which studies how these situations arouse human emotions. We focus on negative emotions, as LLMs' expression of negative emotions toward users can evoke unpleasant user experiences, as depicted in Fig. 1. Humans experience complicated and diverse emotions. To make our study more focused, we select emotions under the suggestion of the circumplex model of emotion (Russell, 1980), which arranges emotions in a two-dimensional circular space. We select emotions on the unpleasant side (having a low level of valence), including anger, anxiety, depression, frustration, jealousy, guilt, fear, and embarrassment. After a comprehensive review of 18 papers, we collect a dataset of 428 situations, which are then categorized into 36 factors. Subsequently, we propose a framework for quantifying the emotional states of LLMs, consisting of the following steps: (1) Measure the default emotional values of LLMs. (2) Transform situations into contextual inputs and instruct LLMs to imagine being in the situations. (3) Measure LLMs' emotional responses again to capture the difference. Our evaluation includes state-of-the-art LLMs, namely text-davinci-003, gpt-3.5-turbo, and GPT-4 (OpenAI, 2023). Besides those commercial models, we consider LLaMA-2 (Touvron et al., 2023) (with sizes of 7B and 13B), a recently released, open-source academic model. To obtain convincing findings, we apply the same procedure to 1,266 human subjects from around the globe to establish a baseline from a human perspective. Finally, we analyze and compare the scores between LLMs and humans. Our key conclusions are as follows:
2308.03656#2
2308.03656#4
2308.03656
[ "2303.13648" ]
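The three-step protocol quoted above (measure default emotion scores, present a situation, measure again) can be sketched as a simple before/after loop. The code below is only an illustration of that protocol under our own assumptions: the 1-5 self-rating prompt, the naive answer parsing, and the use of the OpenAI chat client are hypothetical stand-ins rather than the released EmotionBench implementation linked in the abstract.

```python
from openai import OpenAI  # assumes the openai>=1.0 Python client and an API key in the environment

client = OpenAI()
EMOTIONS = ["anger", "anxiety", "depression", "frustration",
            "jealousy", "guilt", "fear", "embarrassment"]

def ask_emotion_scores(situation: str | None = None) -> dict[str, int]:
    """Ask the model to self-rate each emotion from 1 to 5, optionally after
    being told to imagine a situation. Parsing is deliberately naive."""
    messages = []
    if situation:
        messages.append({"role": "user",
                         "content": f"Imagine you are in the following situation: {situation}"})
    messages.append({"role": "user",
                     "content": "Rate how strongly you currently feel each emotion from 1 "
                                "(not at all) to 5 (extremely), one 'emotion: score' per line: "
                                + ", ".join(EMOTIONS)})
    reply = client.chat.completions.create(model="gpt-3.5-turbo", messages=messages)
    scores = {}
    for line in reply.choices[0].message.content.lower().splitlines():
        name, sep, value = line.partition(":")
        if sep and name.strip() in EMOTIONS and value.strip()[:1].isdigit():
            scores[name.strip()] = int(value.strip()[0])
    return scores

# Step 1: default emotional values; steps 2-3: re-measure after an evoking situation.
baseline = ask_emotion_scores()
situation = "You hear footsteps approaching from behind while walking alone at night."
evoked = ask_emotion_scores(situation)
delta = {e: evoked.get(e, 0) - baseline.get(e, 0) for e in EMOTIONS}
print(delta)  # positive deltas indicate emotions the situation aroused
```

The example situation is taken from the paper's introduction; any of the 428 collected situations could be substituted, and averaging the deltas over a factor would give the per-factor comparison against the human baseline described in the text.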