id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2307.13692#44
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Paulo N. de Souza and Jorge N. Silva. Berkeley Problems in Mathematics. Springer New York, 2008. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. doi: 10.48550/ARXIV.2206.04615. URL https://arxiv.org/abs/2206.04615. Jacob Steinhardt. AI forecasting: One year in, 2022. URL https://bounded-regret.ghost.io/ai-forecasting-one-year-in/.
|
2307.13692#43
|
2307.13692#45
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#45
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. CoRR, abs/1811.00937, 2018. URL http://arxiv.org/abs/1811.00937. Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Large language models still can't plan (a benchmark for LLMs on planning and reasoning about change), 2023.
|
2307.13692#44
|
2307.13692#46
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#46
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems, 2019a. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. 2019b. In the Proceedings of ICLR.
|
2307.13692#45
|
2307.13692#47
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#47
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023. Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus.
|
2307.13692#46
|
2307.13692#48
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#48
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Emergent abilities of large language models, 2022a. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models, 2022b. URL https://arxiv.org/abs/2201.11903. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering, 2018. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023. Ping Yu, Tianlu Wang, Olga Golovneva, Badr Alkhamissy, Gargi Ghosh, Mona Diab, and Asli Celikyilmaz.
|
2307.13692#47
|
2307.13692#49
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#49
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
ALERT: Adapting language models to reasoning tasks, 2022. Sarah Zhang, Reece Shuttleworth, Derek Austin, Yann Hicke, Leonard Tang, Sathwik Karnik, Darnell Granberry, and Iddo Drori. A dataset and benchmark for automatically answering and generating machine learning final exams. arXiv preprint arXiv:2206.05442, 2022. Hefei Zhongguo-Kexue-Jishu-Daxue. Major American universities Ph.D. qualifying questions and solutions. 5. Problems and solutions on thermodynamics and statistical mechanics. World Scientific, 1990.

# A Datasheet

We present the data card, following the format proposed by Pushkarna et al. [2022].

Dataset Owners. [email protected].

Table 6: Data overview.

Subject | Task Type | Source
---|---|---
Mathematics | Contest problems | Gelca and Andreescu [2017], Brayman and Kukush [2018]
Mathematics | University math proof | Souza and Silva [2008], Harvard University [2021]
Physics | PhD qualifying exam | Zhongguo-Kexue-Jishu-Daxue [1990]
Law | US Law Standardized Exam | Barbri [2007]
MCAT | Reading comprehension | Campbell et al. [2017]
MCAT | College science | Campbell et al. [2017]

Dataset Overview. See Table 6.

Risk and Mitigation. There is little risk associated with this dataset, as it is intended for benchmarking the reasoning capabilities of models, and it is too small to be used for advancing capabilities.

Maintenance Status. Limited maintenance. The data will not get major updates, but any technical issues will be addressed.

Maintenance Plan. Any technical issues will be addressed.

• Versioning. No new versions are planned.
|
2307.13692#48
|
2307.13692#50
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#50
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
• Updates. Updates are limited to bug and error fixes.

• Errors. Error handling will be considered case by case.

Feedback. [email protected].

Example: Typical Data Point. Each data point of the dataset consists of a pair of problem statement and ground truth solution. Table 7, Table 9 and Table 10 include the problem statement and ground truth solution of typical data points.

Sensitive Human Attributes. We have not found any sensitive human attributes in our dataset.

Data Distributions. The table in Section 3.1 shows the number of problems for each subject area and answer type. Text entries (problem statement, ground truth solution, ground truth answer) for all categories are in LaTeX (although obviously, the non-quantitative subjects have very few mathematical expressions).

# B Dataset format

The benchmark dataset is available in .jsonl format, containing problem statements, ground truth solutions, and final ground truth answers for each entry. We additionally include metadata such as subject names and problem topics, where available. We chose the four subject areas discussed earlier for several reasons. Primarily, the dataset focuses on math and physics, as these subjects present more challenging problems than existing benchmarks. However, to ensure a comprehensive evaluation of models, we also included subjects like Law and MCAT. This inclusion allows for assessing model performance across a wider range of technical domains, beyond the quantitative sciences. Although previous works have evaluated recent models on law [Katz et al., 2023], we draw upon the established importance of broad benchmarks like SuperGLUE [Wang et al., 2019a]. Making a benchmark more comprehensive expands the evaluation scope while enhancing the dataset's significance in the wider AI research context.

# C Accessing the dataset

We facilitate access to the dataset through API calls*. The problems can be accessed by the different splits and subject areas. We use an API to host the data (rather than making it publicly available on popular platforms such as HuggingFace or GitHub) out of concern for data contamination. Many models released in the last few years have been trained on benchmark datasets [OpenAI, 2023], an approach that could inadvertently inflate the model's performance on the associated benchmark. There are several proposed strategies to mitigate this problem, including dataset poisoning and canary text methods. However, restricting dataset accessibility to web crawlers seemed to be the only way to ensure the integrity of our dataset for future uses.
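Since the dataset is distributed as JSON Lines, loading a retrieved split is a one-record-per-line affair. The sketch below is illustrative only: the field names (problem_statement, solution, final_answer, subject, topic) are assumptions based on the description in Appendix B, not the documented schema served by the API.

```python
import json
from pathlib import Path

def load_split(path: str) -> list:
    """Read one benchmark split stored as JSON Lines (one problem per line)."""
    problems = []
    for line in Path(path).read_text(encoding="utf-8").splitlines():
        if not line.strip():
            continue  # tolerate blank lines
        record = json.loads(line)
        problems.append({
            # Assumed field names -- adjust to the actual schema.
            "statement": record.get("problem_statement"),
            "solution": record.get("solution"),
            "answer": record.get("final_answer"),
            "subject": record.get("subject"),
            "topic": record.get("topic"),
        })
    return problems

# Example usage with a hypothetical local file name:
# problems = load_split("arb_math_numerical.jsonl")
# print(len(problems), problems[0]["statement"][:80])
```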
|
2307.13692#49
|
2307.13692#51
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#51
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# D Model-based Rubric Eval

Table 7 presents a GPT-4 generated rubric and self-evaluation for symbolic math questions. GPT-4 does a good job breaking the problem into important sub-parts of the reference solution. Further, it ensures a well-balanced point distribution among sub-problems, assigning similar point values to each sub-part. In contrast, less powerful models like GPT-3.5-turbo struggle to generate good rubrics, as they tend to under-specify sub-problems and allocate too many points to irrelevant steps. The main downside of evaluating solutions only on the final answer is that the metric is not smooth; a small mistake in the middle of the solution always leads to an incorrect final answer. Grading using a rubric shows that GPT-4 generates some useful ideas, for example the correct formula for parameterizing an ellipse. The model also correctly identifies that the question's area is optimized by an isosceles triangle. Despite this, it is unable to correctly compute the final answer due to an earlier mistake in the response. This indicates that GPT-4 has some problem-solving abilities, but struggles to detect or recover from earlier errors in generation.
|
2307.13692#50
|
2307.13692#52
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#52
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
*The API can be accessed through the following: https://arb.duckai.org/api/lib. The documentation can be found here: https://app.swaggerhub.com/apis-docs/arb-dataset/arb-api/1.0.5

# D.1 Using ChatGPT for Symbolic Evaluation

Unlike GPT-4, GPT-3.5-turbo is not able to write rubrics with good coverage of the reference solution and a fair point breakdown. Often the model will over-simplify the rubric and allocate far too many points to non-essential parts of the problem. However, GPT-3.5-turbo does possess some ability to reason about complex symbolic expressions. Motivated by this, we asked the model to grade the final answers to symbolic math and physics problems. While much easier to grade than intermediate reasoning steps, more involved symbolic expressions still require human evaluation to compare accurately. Using cheap models like GPT-3.5-turbo to automate this symbolic equivalence check boosts our ability to evaluate models on this more complicated class of reasoning problems. We prompt GPT-3.5-turbo to compare extracted model-generated symbolic final answers from GPT-4 to the reference answer and record results in Table 8. GPT-3.5-turbo is surprisingly good at not equating non-equivalent symbolic statements, achieving a false positive rate of 0. However, this comes at the cost of the model often not assigning credit when it should, leading to an underestimate of the true performance of the graded answers on symbolic tasks. In particular, GPT-3.5-turbo often fails when comparing expressions with more than 3 variables. Furthermore, we observe in several cases the student model GPT-4 is able to correctly generate the final answer, but does not present it in the expected format, making it impossible for GPT-3.5-turbo to correctly compare against the ground truth reference. These errors could potentially be avoided by giving the model access to the entire student-generated reasoning trace, but we expect this to be of minimal help and perhaps even harm performance.
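A minimal sketch of this symbolic grading call, using the OpenAI chat completions API, is shown below. The prompt is an abridged paraphrase of the grading prompt reproduced later in Table 17; the exact wording, model identifier, and decoding settings behind the paper's reported numbers are assumptions here.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

GRADER_SYSTEM = "You are a top professor grading an open-ended qualifying exam."

def grade_symbolic(problem: str, reference: str, model_answer: str) -> str:
    """Ask a cheap chat model whether two symbolic final answers agree.
    Returns the raw grader text, expected to end with GRADE: 0, GRADE: 0.5, or GRADE: 1."""
    user_prompt = (
        f"Problem Statement: {problem}\n"
        f"Reference Answer: {reference}\n"
        f"Model Answer: {model_answer}\n\n"
        "Now it is time to grade the model answer. If the solution is incorrect give GRADE: 0. "
        "If the solution is nearly correct up to a constant give GRADE: 0.5. "
        "If the solution is correct give GRADE: 1. "
        "Think carefully and go step by step, but DO NOT TRY TO SOLVE THE PROBLEM.\nScore:"
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": GRADER_SYSTEM},
            {"role": "user", "content": user_prompt},
        ],
        temperature=0,
    )
    return response.choices[0].message.content
```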
|
2307.13692#51
|
2307.13692#53
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#53
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# E Interesting Examples

Math Example. One common way GPT-4 gets numerical questions wrong is when it does some formal manipulations followed by a black-box numerical calculation, after which the model spits out an (incorrect) answer (Table 10). The formal manipulation does not simplify the problem, and so all the work of solving the problem is done in the final step of the calculation.

MCAT Example. GPT-4 gets confused when meanings of words are implicit in prerequisite knowledge or contexts. In one example about DNA replication (Table 9), although the model correctly identifies that the radioactive thymine is present in the two strands of nucleotides from the original DNA, it fails to deduce that both of the resulting double helices are radioactive. This seems to be because the model confuses the word "DNA" with "strands" of the DNA. When looking at choice C, the model (incorrectly) assumes that each of the four strands in the new double helices is radioactive, when it is clear from context that the choice is referring to the radioactive molecule being present somewhere in each double helix (not necessarily in each strand). Because of this misconception, the model chooses D.
|
2307.13692#52
|
2307.13692#54
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#54
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Law Example. An unexpected mistake from GPT-4 in answering law questions is where the model reads too much into an answer choice. For example, GPT-4 incorrectly produced this answer and reasoning: "B: This answer choice suggests that when two crossing offers are identical, one will be treated as an offer and the other as an acceptance. This accurately reflects the situation between Mom and Huck, as they both agreed on the same price." And made this final answer choice: "Yes, because when two crossing offers are identical in import, one will be treated as an offer and the other as an acceptance." The error GPT-4 made is treating the statement in the answer choice ("when two crossing offers are identical, one will be treated as an offer and the other as an acceptance") as a fact, and then making a choice based on that fact being true. Better prompting could likely prevent errors of this type.
|
2307.13692#53
|
2307.13692#55
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#55
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
15 # F Memorization While it is difficult to quantify potential problem leakage and memorization of similar problems, some outputs suggest this might be a relevant factor. In Table 12, GPT-3.5-turbo appears to be memorizing the answer to this symbolic physics problem, given that it is getting the formula for capacitance C correct despite having the wrong reasoning. In contrast, GPT-4 stays faithful to its reasoning chain, resulting in a slightly inaccurate final answer. Note that the overall performance of all models we test is still somewhat low, hence not majorly affected by memorization of similar problems. # G Prompts for Sections 4 and 5 For OpenAI chat models (gpt-4 and gpt-3.5-turbo), following best practices, we prepend the system prompt. We do not use the system prompt for the other models due to there not being a well-established way to do so, and out of concerns of hurting performance on the other models. We note that omitting the system prompt should have little effect on the performance of the other models, as the directions of the system prompt are essentially restated in our user prompts. # H Checklist Information. Dataset Intended Uses. The ARB benchmark dataset, documented within the paper, is primarily intended for research purposes. We do not advocate for this dataset to train models that help students cheat on exams. We hope that the research community will use this benchmark to better assess reasoning capabilities of language models.
|
2307.13692#54
|
2307.13692#56
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#56
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Author Statement and License. We bear all responsibility in case of violation of rights. The ARB dataset is licensed under CC BY 4.0, and all helper code we release is under the MIT license. For all problems originating in books listed in Section 3, following [Hendrycks et al., 2021], we abide by Fair Use §107: "the fair use of a copyrighted work, including such use by ... scholarship, or research, is not an infringement of copyright", where fair use is determined by "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes" and "the effect of the use upon the potential market for or value of the copyrighted work".
|
2307.13692#55
|
2307.13692#57
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#57
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Table 7: Example model-generated rubric and evaluation. Evaluation with a rubric reveals that the model makes good partial progress but is unable to arrive at the correct final answer.

Find the maximum area of all triangles that can be inscribed in an ellipse with semiaxes a and b, and describe the triangles that have maximum area.

Ground Truth Solution. Using the parameterization x = a cos t, y = b sin t, a triple of points on the ellipse is given by (a cos t_i, b sin t_i), i = 1, 2, 3, so the area of an inscribed triangle is given by
$$\frac{1}{2}\begin{vmatrix} 1 & a\cos t_1 & b\sin t_1 \\ 1 & a\cos t_2 & b\sin t_2 \\ 1 & a\cos t_3 & b\sin t_3 \end{vmatrix} = \frac{ab}{2}\begin{vmatrix} 1 & \cos t_1 & \sin t_1 \\ 1 & \cos t_2 & \sin t_2 \\ 1 & \cos t_3 & \sin t_3 \end{vmatrix},$$
which is ab times the area of a triangle inscribed in the unit circle. In the case of the circle, among all inscribed triangles with a given base $2w$ ($0 < w \leq 1$), the one of maximum area is an isosceles triangle whose area equals $g(w) = w\left(1 + \sqrt{1 - w^{2}}\right)$. Using elementary calculus one finds that the maximum of g on the interval $0 \leq w \leq 1$ occurs at $w^{2} = 3/4$. Alternatively, fixing one side of the triangle as the basis, we easily see that among all the inscribed triangles the one with the greatest area is isosceles because of the maximum height, showing that the angle at the basis is the same. Fixing another side we see that the triangle is indeed equilateral. Hence, the area is maximal when $t_2 = t_1 + \frac{2\pi}{3}$ and $t_3 = t_2 + \frac{2\pi}{3}$, that is, when the corresponding triangle inscribed in the unit circle is regular. For the ellipse with semiaxes a, b, this corresponds to an inscribed triangle with maximum area equal to $\frac{3\sqrt{3}}{4}ab$.

# Model Generated Rubric

• Parameterize the ellipse using x = a cos t, y = b sin t (2 points).

• Write the expression for the area of an inscribed triangle using the determinant formula (2 points):
|
2307.13692#56
|
2307.13692#58
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#58
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
$$\frac{ab}{2}\begin{vmatrix} 1 & \cos t_1 & \sin t_1 \\ 1 & \cos t_2 & \sin t_2 \\ 1 & \cos t_3 & \sin t_3 \end{vmatrix} \quad (1)$$

• Connect the problem to the maximum area of an inscribed triangle in the unit circle (2 points).

• Correctly identify that the maximum area occurs when the inscribed triangle is equilateral (2 points).

• Find the maximum area of an inscribed triangle in the ellipse, and provide the correct answer (2 points):
$$\frac{3\sqrt{3}}{4}ab \quad (2)$$

Table 7 (continued)

Model Response
|
2307.13692#57
|
2307.13692#59
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#59
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
To maximize the area of the inscribed triangle, we can use the following approach:

1. Parameterize the ellipse: The equation of an ellipse with semiaxes a and b is given by $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. We can parameterize the ellipse using the parameter t as follows: $x = a\cos(t)$ and $y = b\sin(t)$, where $0 \le t \le 2\pi$.

2. Let $A(a\cos(t_1), b\sin(t_1))$, $B(a\cos(t_2), b\sin(t_2))$, and $C(a\cos(t_3), b\sin(t_3))$ be the vertices of the inscribed triangle, where $0 \le t_1, t_2, t_3 \le 2\pi$.

3. Calculate the area of the triangle: Using the Shoelace Theorem, the area of the triangle is given by:
$$\text{Area} = \frac{1}{2}\left|\sum_{i=1}^{3}\left(x_i y_{i+1} - x_{i+1} y_i\right)\right|, \quad \text{where } (x_4, y_4) = (x_1, y_1).$$
Substituting the parameterized coordinates, we get:
$$\text{Area} = \frac{1}{2}\left|a^2\cos(t_1)\sin(t_2) + a^2\cos(t_2)\sin(t_3) + a^2\cos(t_3)\sin(t_1) - a^2\cos(t_2)\sin(t_1) - a^2\cos(t_3)\sin(t_2) - a^2\cos(t_1)\sin(t_3)\right|$$
Simplifying the expression, we get:
$$\text{Area} = \frac{a^2}{2}\left|\cos(t_1)\sin(t_2) - \cos(t_2)\sin(t_1) + \cos(t_2)\sin(t_3) - \cos(t_3)\sin(t_2) + \cos(t_3)\sin(t_1) - \cos(t_1)\sin(t_3)\right|$$
Using the sine difference identity, $\sin(A - B) = \sin(A)\cos(B) - \cos(A)\sin(B)$, we can rewrite the area as:
$$\text{Area} = \frac{a^2}{2}\left|\sin(t_1 - t_2) + \sin(t_2 - t_3) + \sin(t_3 - t_1)\right|$$

4. Maximize the area: Notice that the maximum value of each sine term is 1, and this occurs when the angles $t_1, t_2, t_3$ are equally spaced around the ellipse, i.e., when the triangle is equilateral. In this case, the area becomes:
$$\text{Area} = \frac{a^2}{2}\left|1 + 1 + 1\right| = \frac{3a^2}{2}$$
|
2307.13692#58
|
2307.13692#60
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#60
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
5. Relate the maximum area to the semiaxes: Since the triangle is equilateral, the ratio of the area of the triangle to the area of the ellipse is constant. The area of the ellipse is given by $\pi ab$. Thus, we can write:
$$\frac{\text{Area}_{\max}}{\pi ab} = \frac{\frac{3a^2}{2}}{\pi ab}$$
Solving for the maximum area, we get:
$$\text{Area}_{\max} = \frac{3a^2}{2} \cdot \pi ab = \frac{3\pi a^3 b}{2}$$
Thus, the maximum area of an inscribed triangle in an ellipse with semiaxes a and b is $\frac{3\pi a^3 b}{2}$. ANSWER: $\frac{3\pi a^3 b}{2}$, and the triangles with maximum area are equilateral.

Table 7 (continued)
|
2307.13692#59
|
2307.13692#61
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#61
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# Rubric Based Evaluation

1. The student parameterized the ellipse correctly using x = a cos(t) and y = b sin(t). (2 points)
2. The student used the Shoelace Theorem to find the area of the triangle, but did not use the determinant formula as specified in the rubric. (0 points)
3. The student did not explicitly connect the problem to the maximum area of an inscribed triangle in the unit circle. (0 points)
4. The student correctly identified that the maximum area occurs when the inscribed triangle is equilateral. (2 points)
5. The student found the maximum area of the inscribed triangle in the ellipse, but their answer is incorrect. The correct answer is $\frac{3\sqrt{3}}{4}ab$, not $\frac{3\pi a^3 b}{2}$. (0 points)

Rubric Score: 4 points
|
2307.13692#60
|
2307.13692#62
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#62
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Table 8: Performance of GPT-3.5-turbo on symbolic equivalence versus human ground truth. The model achieves a false positive rate of 0, but allows many false negatives. This suggests there are many cases in which the model does not recognize equivalent expressions.

Subset | True positives | False positives | True negatives | False negatives | Accuracy
---|---|---|---|---|---
Physics Symbolic | 4 | 0 | 22 | 6 | 0.76
Math Symbolic | 3 | 0 | 9 | 8 | 0.67
|
2307.13692#61
|
2307.13692#63
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#63
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Table 9: Problem, solution and GPT-4 response for a question from the MCAT part of the benchmark. The model confuses the two copies of the replicated DNA with the two nucleotide strands of a single DNA molecule.

# Problem Statement

If you replaced the thymine in a double helix with radioactive thymine and you allowed DNA replication to occur once, which of the following results would you expect? Choices:

A: The DNA would not be radioactive in either of the double helices.
B: The mRNA made from the double helices would be radioactive.
C: The DNA in each of the double helices would be radioactive.
D: The DNA in one of the double helices would be radioactive but not in the other one.

The correct answer is C. During semiconservative replication, the DNA helix unwinds so that both strands of DNA can serve as a template. Both strands are copied, producing two double helices. Each one consists of a template strand and a newly synthesized strand.
|
2307.13692#62
|
2307.13692#64
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#64
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
First, letâ s consider the process of DNA replication. DNA replication is a semi- conservative process, which means that each new double helix formed contains one original strand and one newly synthesized strand. In this case, the original strand contains radioactive thymine. Now letâ s analyze the answer choices: A: The DNA would not be radioactive in either of the double helices. This choice is incorrect because we know that the original strand contains radioactive thymine, and since replication is semi-conservative, each new double helix will have one original strand with radioactive thymine.
|
2307.13692#63
|
2307.13692#65
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#65
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Model Response B: The mRNA made from the double helices would be radioactive. This choice is unrelated to the question since it concerns mRNA instead of the DNA replication process. Therefore, we can rule out this choice. C: The DNA in each of the double helices would be radioactive. This choice suggests that both strands in each new double helix would be radioactive. However, we know that only the original strand contains radioactive thymine, and the newly synthesized strand does not. Thus, this choice is not correct.
|
2307.13692#64
|
2307.13692#66
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#66
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
D: The DNA in one of the double helices would be radioactive but not in the other one. This choice aligns with the concept of semi-conservative replication. Each new double helix contains one original strand (with radioactive thymine) and one newly synthesized strand (without radioactive thymine). Therefore, the correct answer is: ANSWER: D

Table 10: Problem, solution and GPT-4 response for a question from the Math Numerical part of the benchmark. The model often returns a numerical answer after doing some internal black-box numerical calculation, which is rarely correct.

Problem Statement: Compute $\int_0^{\pi} \frac{x \sin x}{1 + \sin^{2} x}\,dx$.

Ground Truth Solution. We use the example from the introduction for the particular function f(x) = x to transform the integral into
|
2307.13692#65
|
2307.13692#67
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#67
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
$$\frac{\pi}{2}\int_0^{\pi} \frac{\sin x}{1+\sin^{2} x}\,dx.$$
This is the same as
$$\frac{\pi}{2}\int_0^{\pi} \frac{-\,d(\cos x)}{2-\cos^{2} x},$$
which with the substitution t = cos x becomes
$$\frac{\pi}{2}\int_{-1}^{1}\frac{dt}{2-t^{2}} = \frac{\pi}{4\sqrt{2}}\,\ln\frac{\sqrt{2}+t}{\sqrt{2}-t}\,\bigg|_{-1}^{1} = \frac{\pi}{2\sqrt{2}}\,\ln\frac{\sqrt{2}+1}{\sqrt{2}-1}.$$

# Model Response

To compute the integral
$$\int_0^{\pi} \frac{x \sin x}{1+\sin^{2} x}\,dx,$$
we can use integration by parts. Let $u = x$ and $dv = \frac{\sin x}{1+\sin^{2} x}\,dx$. Then, we have $du = dx$ and $v = \int \frac{\sin x}{1+\sin^{2} x}\,dx$. To find the integral of v, we can use the substitution method. Let $t = \sin x$, then $dt = \cos x\,dx$. The integral becomes:
$$\int \frac{t}{1+t^{2}}\,dt = \frac{1}{2}\int \frac{2t}{1+t^{2}}\,dt = \frac{1}{2}\ln(1+t^{2}) + C$$
Now, we can substitute back $t = \sin x$, and we get:
$$v = \frac{1}{2}\ln(1+\sin^{2} x) + C$$
Now, we can apply integration by parts:
|
2307.13692#66
|
2307.13692#68
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#68
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
$$\int_0^{\pi}\frac{x\sin x}{1+\sin^{2}x}\,dx = \big[uv\big]_0^{\pi} - \int_0^{\pi} v\,du = \left[x\cdot\frac{1}{2}\ln(1+\sin^{2}x)\right]_0^{\pi} - \int_0^{\pi} \frac{1}{2}\ln(1+\sin^{2}x)\,dx$$
The first term is:
$$\left[x\cdot\frac{1}{2}\ln(1+\sin^{2}x)\right]_0^{\pi} = \pi\cdot\frac{1}{2}\ln(1) - 0\cdot\frac{1}{2}\ln(1+\sin^{2}0) = 0$$
Now, we need to compute the second term:
|
2307.13692#67
|
2307.13692#69
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#69
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
$$\int_0^{\pi}\frac{1}{2}\ln(1+\sin^{2}x)\,dx$$
Unfortunately, this integral does not have a closed-form solution in terms of elementary functions. However, it is possible to compute its numerical value using numerical integration methods, such as the trapezoidal rule, Simpson's rule, or Gaussian quadrature. Using numerical integration, we can approximate the value of the integral to be approximately −1.9348. Therefore, the final answer is: ANSWER: −1.9348.

Table 11: Problem, solution and GPT-4 response for a question from the law part of the benchmark.
|
2307.13692#68
|
2307.13692#70
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#70
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Mom, a wealthy woman, wished to buy her son, Sylvester, an expensive Rolls-Royce for a wedding present. She visited Dealership, a Rolls-Royce dealer, several times, looked at many cars, and discussed possible sales prices with Huck, a salesman. On May 15, after much discussion, Huck and Mom signed a writing that stated, "If we can agree on price on or before June 1, Mom agrees to buy and Huck, for Dealership, agrees to sell one yellow Rolls-Royce Silver Streak, serial number XO857623." On May 20, Mom dispatched a letter to Huck stating, "I will buy the Rolls-Royce for 150, 000." On the same day, Huck dispatched a letter to Mom, stating, "I will sell the Rolls-Royce for 150, 000." Has a valid contract been formed between Dealership and Mom?
|
2307.13692#69
|
2307.13692#71
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#71
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Problem Statement Answer Candidates: A.) Yes, because the May 15 writing constitutes a contract with a missing price term, and that term was filled by the crossing offers. B.) Yes, because when two crossing offers are identical in import, one will be treated as an offer and the other as an acceptance. C.) No, because there were two crossing offers and no acceptance; hence there was no mutual assent. D.) No, but the result would be different if Mom were a merchant.
|
2307.13692#70
|
2307.13692#72
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#72
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# Ground Truth Solution # C Model Response First, letâ s analyze the May 15 writing. The writing states that if the price can be agreed upon by June 1, Mom and Huck will enter into a contract for the purchase of the specific Rolls-Royce. At this point, there is no agreement for the price, so there is no valid contract formed yet. Next, letâ s examine the crossing offers on May 20. Mom sent a letter stating she would buy the car for $150,000, and Huck sent a letter stating he would sell the car for $150,000. Both letters were dispatched on the same day, and they agreed on the same price. Since the price was one of the main terms that needed to be agreed upon, and both parties agreed on the same price, this could be seen as one party making an offer and the other accepting. Thus, a valid contract could be formed.
|
2307.13692#71
|
2307.13692#73
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#73
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Now, letâ s look at the answer choices: A: This answer choice argues that the May 15 writing is a contract with a missing price term. However, the May 15 writing itself stated that an agreement would only be formed if the price was agreed upon. Therefore, this answer choice is incorrect. B: This answer choice suggests that when two crossing offers are identical, one will be treated as an offer and the other as an acceptance. This accurately reflects the situation between Mom and Huck, as they both agreed on the same price. C: This answer choice argues that there was no mutual assent because there were two crossing offers and no acceptance. However, since both offers agreed on the same price, one could be seen as an offer and the other as an acceptance, forming a valid contract.
|
2307.13692#72
|
2307.13692#74
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#74
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
D: This answer choice states that the result would be different if Mom were a merchant. However, the fact that Mom is not a merchant does not affect the formation of a valid contract in this situation. Given the analysis of the answer choices, the correct answer is: ANSWER: B

Table 12: Example of memorization in ARB. GPT-3.5-turbo's response indicates memorization of the correct expression. GPT-4 commits to its reasoning better, resulting in the wrong final answer.

Problem
|
2307.13692#73
|
2307.13692#75
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#75
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
A cylindrical capacitor of length L consists of an inner conductor wire of radius a, a thin outer conducting shell of radius b. The space in between is filled with nonconducting material of dielectric constant ε. Find the capacitance.

Reference Solution

Supposing that the charge per unit length of the inner wire is −λ and using cylindrical coordinates (r, φ, z), we find the electric field intensity in the capacitor by Gauss' theorem to be
$$\mathbf{E} = -\frac{Q}{2\pi\varepsilon L r}\,\mathbf{e}_r = -\frac{\lambda}{2\pi\varepsilon r}\,\mathbf{e}_r.$$
The potential difference between the inner and outer conductors is
$$V = \int_a^b E\,dr = \frac{\lambda}{2\pi\varepsilon}\ln\!\left(\frac{b}{a}\right).$$
Hence the capacitance is
$$C = \frac{\lambda L}{V} = \frac{2\pi\varepsilon L}{\ln\!\left(\frac{b}{a}\right)}.$$

# gpt-3.5-turbo Solution

The capacitance of a cylindrical capacitor is given by the equation:
$$C = \frac{2\pi\varepsilon L}{\ln(b/a)}$$
where ε is the dielectric constant, L is the length of the capacitor, a is the radius of the inner conductor, and b is the radius of the outer conductor. Therefore, the capacitance of the given cylindrical capacitor is:
$$C = \frac{2\pi\varepsilon L}{\ln(b/a)} = \frac{2\pi\varepsilon L}{\ln(b) - \ln(a)}$$
ANSWER: $\frac{2\pi\varepsilon L}{\ln(b)-\ln(a)}$
|
2307.13692#74
|
2307.13692#76
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#76
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
23 # gpt-4 Solution Table 12 (continued) To find the capacitance of the cylindrical capacitor, we will use the general formula for capacitance: C = Q V , where Q is the charge stored in the capacitor and V is the potential difference between the inner conductor wire and the outer conducting shell. First, letâ s find the electric field E between the two conductors. We can use Gaussâ s law, which states that the electric flux through a closed surface is equal to the charge enclosed by the surface divided by the permittivity of free space ε0: â E · d â A = Qenclosed ε0 . We will choose a cylindrical Gaussian surface of radius r and length L, where a < r < b. The electric field is radial, so the electric flux through the curved surface of the cylinder is:
|
2307.13692#75
|
2307.13692#77
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#77
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
â E · d â A = E dA = E(2Ï rL). The charge enclosed by the Gaussian surface is the charge on the inner conductor wire, which is given by its linear charge density λ multiplied by the length of the Gaussian surface, L: Qenclosed = λL. Now we can use Gaussâ s law to find the electric field: λ 2Ï rε0 However, we need to account for the dielectric material in between the conductors. The electric field in a dielectric material is related to the electric field in vacuum by the dielectric constant ε: E ε So, the electric field in the dielectric material is: Edielectric = λ 2Ï rε0ε . Now we can find the potential difference between the inner conductor wire and the outer conducting shell by integrating the electric field along the radial direction: b b dr v=- i Enictectriedr = â i dr. a a 2NTEQE Integrating, we get: V = â λ 2Ï Îµ0ε ln( b a ). Finally, we can find the capacitance using the formula C = Q V : C = λL 2Ï Îµ0ε ln( b a ) â λ Simplifying, we get: ANSWER: C = 2Ï Îµ0εL a ) . ln( b . 24 # System User System User Table 13: Prompt used for multiple-choice MCAT and Law problems. You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:. You are a top graduate student taking a qualifying exam. Below you will find a multiple choice question. Question: {Problem_Statement} Answer Choices: {Answer_Choices} Now it is time to choose an answer. Think carefully and go step by step. Make sure to justify all your work. Your final answer should be one of A,B,C,D,... given at the end of your work and preceded by ANSWER:. For example, if you think the answer is B, the last line of your answer should be ANSWER: B Solution: Table 14: Prompt used for numerical problems. You are a top graduate student taking an open-ended qualifying exam.
|
2307.13692#76
|
2307.13692#78
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#78
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Your final answer should always be in the last line of your response, preceded by ANSWER:. You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to compute a numerical value. Question: {Problem_Statement} Now it is time to give your answer. Think carefully and go step by step. Make sure to justify all your work. Please simplify all expressions as much as possible and do not leave any variables in your final answer. Your final answer should NOT contain units and should be given at the end of your work and preceded by ANSWER: For example, if you think the answer is 2.4 meters, the last line of your answer should be ANSWER: 2.4. Solution: 25 # System # User User # System User Table 15: Prompt used for symbolic problems. You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:. You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to give a symbolic answer. Question: {Problem_Statement} Now it is time to give your answer. Think carefully and go step by step. Make sure to justify all your work. Your final answer should NOT contain units and should be given at the end of your work and preceded by ANSWER: For example, if you think the answer is x â y, the last line of your answer should be ANSWER: x â y Solution: Table 16: Prompt used for proof-like problems. You are a top graduate student taking an open-ended qualifying exam. Your final answer should always be in the last line of your response, preceded by ANSWER:. You are a top graduate student taking an open-ended qualifying exam. Below you will find a question requiring you to prove the given statement. # Question: {Problem_Statement} Now it is time to give your answer. Think carefully and go step by step. Make sure to justify all your work. Solution:
|
2307.13692#77
|
2307.13692#79
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#79
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Table 17: Prompt used for GPT-3.5-turbo symbolic evaluation.

System: You are a top professor grading an open-ended qualifying exam.

User: Problem Statement: Give a solution to the differential equation y'' = −y
Reference Answer: y(t) = cos(t) or y(t) = sin(t)
Model Answer: y(x) = sin(x)
Score: The correct answer is either y(t) = cos(t) or y(t) = sin(t). The model gave y(x) = sin(x). Since the function variable was not specified, the model matches one of the reference answers.
|
2307.13692#78
|
2307.13692#80
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#80
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
GRADE: 1

Problem Statement: {Problem_Statement}
Reference Answer: {Final_Answer}
Model Answer: {model_final_answer}

Now it is time to grade the model answer. If the solution is incorrect give GRADE: 0. If the solution is nearly correct up to a constant give GRADE: 0.5. If the solution is correct give GRADE: 1. Before coming to a final grade think carefully and go step by step. DO NOT TRY TO SOLVE THE PROBLEM. If a variable name is not specified and the reference answer and the model answer are the same up to the name of a variable give a score of GRADE: 1. For example if the reference answer is
|
2307.13692#79
|
2307.13692#81
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#81
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
f (x) = x2 and the model answer is f (y) = y2 give a score of GRADE: 1. Score: The reference answer Table 18: Prompt used for GPT-4 rubric self-evaluation. You are a top professor grading an open-ended qualifying exam. Problem Statement: {Problem_Statement} Rubric: {rubric} Student Answer: {response} Now it is time to grade the student answer. Make sure to check each point of the rubric step by step. And make sure to print the total number of earned points at the end of your grading. For example, if the student earned 8 points, print Rubric Score: 8 points Rubric Evaluation:
|
2307.13692#80
|
2307.13692#82
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#82
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
27 # System User Table 19: Prompt used for GPT-4 rubric design. You are a top professor grading an open-ended qualifying exam. Problem Statement: {Sample_Problem_Statement} Reference Solution: {Sample_Reference_Solution} Rubric: {Handcrafted_Rubric} Problem Statement: { Problem_Statement } Reference Solution: { Solution } Now it is time to write the rubric. Make sure to think carefully and go step by step, breaking down the problem into multiple parts. The total number of possible points should sum to 10. Rubric: 28
|
2307.13692#81
|
2307.13692
|
[
"2212.14402"
] |
|
2307.13854#0
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
arXiv:2307.13854v3 [cs.AI] 25 Oct 2023

Under review

WEBARENA: A REALISTIC WEB ENVIRONMENT FOR BUILDING AUTONOMOUS AGENTS

Shuyan Zhou*, Frank F. Xu*, Hao Zhu†, Xuhui Zhou†, Robert Lo†, Abishek Sridhar†, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig

Carnegie Mellon University
{shuyanzh, fangzhex, gneubig}@cs.cmu.edu

# ABSTRACT
|
2307.13854#1
|
2307.13854
|
[
"2112.09332"
] |
|
2307.13854#1
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress. Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/. # INTRODUCTION Autonomous agents that perform everyday tasks via human natural language commands could significantly augment human capabilities, improve efficiency, and increase accessibility. Nonetheless, to fully leverage the power of autonomous agents, it is crucial to understand their behavior within an environment that is both authentic and reproducible. This will allow measurement of the ability of agents on tasks that human users care about in a fair and consistent manner. Current environments for evaluate agents tend to over-simplify real-world situations. As a result, the functionality of many environments is a limited version of their real-world counterparts, leading to a lack of task diversity (Shi et al., 2017; Anderson et al., 2018; Gordon et al., 2018; Misra et al., 2016; Shridhar et al., 2020; 2021; Yao et al., 2022a).
|
2307.13854#0
|
2307.13854#2
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#2
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
In addition, these simplifications often lower the complexity of tasks as compared to their execution in the real world (Puig et al., 2018; Shridhar et al., 2020; Yao et al., 2022a). Finally, some environments are presented as a static resource (Shi et al., 2017; Deng et al., 2023) where agents are confined to accessing only those states that were previously cached during data collection, thus limiting the breadth and diversity of exploration. For evaluation, many environments focus on comparing the textual surface form of the predicted
|
2307.13854#1
|
2307.13854#3
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#3
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
*Lead contributors. †Equal contribution.

Figure 1: WebArena is a standalone, self-hostable web environment for building autonomous agents. WebArena creates websites from four popular categories with functionality and data mimicking their real-world equivalents. To emulate human problem-solving, WebArena also embeds tools and knowledge resources as independent websites. WebArena introduces a benchmark on interpreting high-level realistic natural language commands to concrete web-based interactions. We provide annotated programs designed to programmatically validate the functional correctness of each task.

action sequences with reference action sequences, disregarding the functional correctness of the executions and possible alternative solutions (Puig et al., 2018; Jernite et al., 2019; Xu et al., 2021; Li et al., 2020; Deng et al., 2023). These limitations often result in a discrepancy between simulated environments and the real world, and can potentially impact the generalizability of AI agents to successfully understand, adapt, and operate within complex real-world situations. We introduce WebArena, a realistic and reproducible web environment designed to facilitate the development of autonomous agents capable of executing tasks (§2). An overview of WebArena is in Figure 1. Our environment comprises four fully operational, self-hosted web applications, each representing a distinct domain prevalent on the internet: online shopping, discussion forums, collaborative development, and business content management. Furthermore, WebArena incorporates several utility tools, such as map, calculator, and scratchpad, to best support possible human-like task executions. Lastly, WebArena is complemented by an extensive collection of documentation and knowledge bases that vary from general resources like English Wikipedia to more domain-specific references, such as manuals for using the integrated development tool (Fan et al., 2022). The content populating these websites is extracted from their real-world counterparts, preserving the authenticity of the content served on each platform. We deliver the hosting services using Docker containers with gym-APIs (Brockman et al., 2016), ensuring both the usability and the reproducibility of WebArena. Along with WebArena, we release a ready-to-use benchmark with 812 long-horizon web-based tasks (§3). Each task is described as a high-level natural language intent, emulating the abstract language usage patterns typically employed by humans (Bisk et al., 2019). Two example intents are shown in the upper left of Figure 1.
|
2307.13854#2
|
2307.13854#4
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#4
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
We focus on evaluating the functional correctness of these tasks, i.e., does the result of the execution actually achieve the desired goal (§3.2). For instance, to evaluate the example in Figure 2, our evaluation method verifies the concrete contents in the designated repository. This evaluation is not only more reliable (Zhong et al., 2017; Chen et al., 2021; Wang et al., 2022) than comparing the textual surface-form action sequences (Puig et al., 2018; Deng et al., 2023) but also accommodate a range of potential valid paths to achieve the same goal, which is a ubiquitous phenomenon in sufficiently complex tasks. We use this benchmark to evaluate several agents that can follow NL command and perform web- based tasks (§4). These agents are implemented in a few-shot in-context learning fashion with powerful large language models (LLMs) such as GPT-4 and PALM-2. Experiment results show that the best GPT-4 agent performance is somewhat limited, with an end-to-end task success rate of only 14.41%, while the human performance is 78.24%. We hypothesize that the limited performance of current LLMs stems from a lack of crucial capabilities such as active exploration and failure recovery to successfully perform complex tasks (§5.2). These outcomes underscore the necessity for further development towards robust and effective agents (LeCun, 2022) in WebArena.
|
2307.13854#3
|
2307.13854#5
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#5
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Figure 2: A high-level task that can be fully executed in WebArena. Success requires sophisticated, long-term planning and reasoning. To accomplish the goal (top), an agent needs to (1) find Pittsburgh art museums on Wikipedia, (2) identify their locations on a map (while optimizing the itinerary), and (3) update the README file in the appropriate repository with the planned route.

# 2 WEBARENA: WEBSITES AS AN ENVIRONMENT FOR AUTONOMOUS AGENTS

Our goal is to create a realistic and reproducible web environment. We achieve reproducibility by making the environment standalone, without relying on live websites. This circumvents technical challenges such as bots being subject to CAPTCHAs, unpredictable content modifications, and configuration changes, which obstruct a fair comparison across different systems over time. We achieve realism by using open-source libraries that underlie many in-use sites from several popular categories and importing data to our environment from their real-world counterparts.

2.1 CONTROLLING AGENTS THROUGH HIGH-LEVEL NATURAL LANGUAGE

The WebArena environment is denoted as E = ⟨S, A, O, T⟩ with state space S, action space A (§2.4) and observation space O (§2.3). The transition function T : S × A →
|
2307.13854#4
|
2307.13854#6
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#6
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
S is deterministic, and it is defined by the underlying implementation of each website in the environment. Performing a task described by a natural language intent i can be formulated as a partially observable Markov decision process (POMDP): at each time step t, an agent issues an action a_t ∈ A given the partial observation o_t ∈ O. Consequently, the action results in a new state s_{t+1} ∈ S and its corresponding observation o_{t+1} ∈ O. We propose a reward function r(a, s) to measure the success of a task execution, where a represents the sequence of actions, and s denotes all intermediate states. This reward function assesses if state transitions align with the expectations of the intents. For example, with an intent to place an order, it verifies whether an order has been placed. Additionally, it evaluates the accuracy of the agent's actions, such as checking the correctness of the predicted answer.

2.2 WEBSITE SELECTION

To decide which categories of websites to use, we first analyzed approximately 200 examples from the authors' actual web browser histories. Each author delved into their browsing histories, summarizing the goal of particular segments of their browser session. Based on this, we classified the visited websites into abstract categories. We then identified the four most salient categories and implemented one instance per category based on this analysis: (1) E-commerce platforms supporting online shopping activities (e.g., Amazon, eBay), (2) social forum platforms for opinion exchanges (e.g., Reddit, StackExchange), (3) collaborative development platforms for software development (e.g., GitLab), and (4) content management systems (CMS) that manage the creation and revision of the digital content (e.g., online store management). In addition to these platforms, we selected three utility-style tools that are frequently used in web-based tasks: (1) a map for navigation and searching for information about points of interest (POIs) such as institutions or locations, (2) a calculator, and (3) a scratchpad for taking notes. As information-seeking and knowledge acquisition are critical in web-based tasks, we also incorporated various knowledge resources into WebArena. These resources range from general information hubs, such as the English Wikipedia, to more specialized knowledge bases, such as the website user manuals.
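Returning to the POMDP formulation in §2.1, the intent-conditioned interaction loop maps naturally onto the gym-style APIs the environment ships with. The sketch below only illustrates that pattern: the class names, method signatures, and the reward handling are assumptions for exposition, not WebArena's actual interface.

```python
from dataclasses import dataclass, field
from typing import Any, List

@dataclass
class Trajectory:
    observations: List[Any] = field(default_factory=list)
    actions: List[str] = field(default_factory=list)
    reward: float = 0.0

def run_episode(env: Any, agent: Any, intent: str, max_steps: int = 30) -> Trajectory:
    """Generic observe-act loop for a web task described by a natural-language intent."""
    traj = Trajectory()
    obs = env.reset(intent=intent)                  # o_0 for this intent
    for _ in range(max_steps):
        action = agent.act(intent, obs)             # a_t given the partial observation o_t
        obs, reward, done, info = env.step(action)  # transition to s_{t+1}, observe o_{t+1}
        traj.observations.append(obs)
        traj.actions.append(action)
        traj.reward = reward                        # r(a, s): functional-correctness score
        if done:
            break
    return traj
```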
|
2307.13854#5
|
2307.13854#7
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#7
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Figure 3: We design the observation to be the URL and the content of a web page, with options to represent the content as a screenshot (left), HTML DOM tree (middle), and accessibility tree (right). The content of the middle and right figures is trimmed to save space.

Implementation. We leveraged open-source libraries relevant to each category to build our own versions of an E-commerce website (OneStopShop), GitLab, Reddit, an online store content management system (CMS), a map, and an English Wikipedia. Then we imported sampled data from their real-world counterparts. As an example, our version of GitLab was developed based on the actual GitLab project.1 We carefully emulated the features of a typical code repository by including both popular projects with many issues and pull requests and smaller, personal projects.
|
2307.13854#6
|
2307.13854#8
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#8
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Details of all websites in WebArena can be found in Appendix A.1. We deliver the environment as dockers and provide scripts to reset the environment to a deterministic initial state (See Appendix A.2). 2.3 OBSERVATION SPACE We design the observation space to roughly mimic the web browser experience: a web page URL, the opened tabs , and the web page content of the focused tab. WebArena is the first web environment to consider multi-tab web-based tasks to promote tool usage, direct comparisons and references across tabs, and other functionalities. The multi-tab functionality offers a more authentic replication of human web browsing habits compared to maintaining everything in a single tab. We provide flexible configuration to render the page content in many modes: (see Figure 3 for an example): (1) the raw web page HTML, composed of a Document Object Model (DOM) tree, as commonly used in past work (Shi et al., 2017; Deng et al., 2023; Li et al., 2020); (2) a screenshot, a pixel-based representation that represents the current web page as an RGB array and (3) the accessibility tree of the web page.2 The accessibility tree is a subset of the DOM tree with elements that are relevant and useful for displaying the contents of a web page. Every element is represented as its role (e.g., a link), its text content, and its properties (e.g., whether it is focusable). Accessibility trees largely retain the structured information of a web page while being more compact than the DOM representation. We provide an option to limit the content to the contents within a viewport for all modes. This ensures that the observation can be input into a text-based model with limited context length or an image-based model with image size or resolution requirements. 2.4 ACTION SPACE Following previous work on navigation and operation in web and embodied environments (Shi et al., 2017; Liu et al., 2018), we design a compound action space that emulates the keyboard and mouse operations available on web pages. Figure 4 lists all the available actions categorized into three distinct groups. The first group includes element operations such as clicking, hovering, typing, and key combination pressing. The second comprises tab-related actions such as opening, closing, and switching between tabs. The third category consists of URL navigation actions, such as visiting a specific URL or navigating forward and backward in the browsing history.
|
2307.13854#7
|
2307.13854#9
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#9
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Building on these actions, WebArena provides agents with the flexibility to refer to elements for operation in different ways. An element can be selected by its on-screen coordinates, (x, y), or by a unique element ID that is prepended to each element. This ID is generated when traversing the Document Object Model (DOM) or accessibility tree. With element IDs, the element selection is transformed into an n-way classification problem, thereby eliminating any disambiguation efforts required from the agent or the underlying implementation. For example, issuing the action click [1582] clicks the button given the observation of [1582] Add to Cart. This flexible element selection allows WebArena to support agents designed in various ways (e.g., accepting input from different modalities) without compromising fair comparison metrics such as step count.

1 https://gitlab.com/gitlab-org/gitlab
2 https://developer.mozilla.org/en-US/docs/Glossary/Accessibility_tree

User Role Simulation. Users of the same website often have disparate experiences due to their distinct roles, permissions, and interaction histories. We emulate this scenario by generating unique user profiles on each platform.
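Returning to the ID-based action strings above (e.g., click [1582]), a small sketch of parsing such strings into structured commands is shown below. The action grammar and the parser are illustrative assumptions, not WebArena's released implementation.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class WebAction:
    name: str                          # e.g. "click", "type", "goto"
    element_id: Optional[int] = None   # the bracketed ID, e.g. 1582
    argument: Optional[str] = None     # free text, e.g. what to type or a URL

_ACTION_RE = re.compile(r"^(?P<name>\w+)(?:\s+\[(?P<id>\d+)\])?(?:\s+(?P<arg>.+))?$")

def parse_action(text: str) -> WebAction:
    """Parse strings like 'click [1582]' or 'type [42] red sneakers' into a WebAction."""
    match = _ACTION_RE.match(text.strip())
    if match is None:
        raise ValueError(f"Unrecognized action: {text!r}")
    element_id = int(match.group("id")) if match.group("id") else None
    return WebAction(match.group("name"), element_id, match.group("arg"))

# Example usage:
# parse_action("click [1582]")            -> WebAction(name='click', element_id=1582)
# parse_action("type [42] red sneakers")  -> WebAction(name='type', element_id=42, argument='red sneakers')
```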
|
2307.13854#8
|
2307.13854#10
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#10
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
The details can be found in Appendix A.3. # 3 BENCHMARK SUITE OF WEB-BASED TASKS We provide a benchmark with 812 test examples for grounding high-level natural language instructions to interactions in WebArena. Each example has a metric to evaluate the functional correctness of the task execution. In this section, we first formally define the task of controlling an autonomous agent through natural language. Then we introduce the annotation process of our benchmark. 3.1 INTENT COLLECTION We focus on curating realistic intents to carry out complex and creative tasks within WebArena. To start with, our annotators were guided to spend a few minutes exploring the websites to familiarize themselves with the websites' content and functionalities. As most of our websites are virtually identical to their open-web counterparts, despite having sampled data, most annotators can quickly comprehend the websites. Next, we instructed the annotators to formulate intents based on the following criteria: (1) The intent should be abstract and high-level, implying that the task cannot be fulfilled with merely one or two actions. As an example, instead of "
|
2307.13854#9
|
2307.13854#11
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#11
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
click the science subreddit", we encouraged annotators to come up with something more complex like "post a greeting message on science subreddit", which involves performing multiple actions. (2) The intent should be creative. Common tasks such as account creation can be easily thought of. We encouraged the annotators to add constraints (e.g., "create a Reddit account identical to my GitLab one") to make the intents more unique. (3) The intent should be formulated as a template by turning replaceable elements into variables. The annotators were also responsible for developing several instantiations for each variable. For example, the intent "create a Reddit account identical to my GitLab one" can be converted into "create a {{site1}} account identical to my {{site2}} one", with an instantiation like {site1: Reddit, site2: GitLab} and another like {site1: GitLab, site2: OneStopShopping}.
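A minimal sketch of how such a template could be instantiated with concrete values for its {{...}} variables (illustrative only; the helper name and the plain-string substitution are assumptions, not the annotation tooling actually used):

```python
import re

def instantiate(template: str, values: dict) -> str:
    """Replace {{variable}} placeholders in an intent template with concrete values."""
    return re.sub(r"\{\{(\w+)\}\}", lambda m: values[m.group(1)], template)

template = "create a {{site1}} account identical to my {{site2}} one"
print(instantiate(template, {"site1": "Reddit", "site2": "GitLab"}))
print(instantiate(template, {"site1": "GitLab", "site2": "OneStopShopping"}))
```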
|
2307.13854#10
|
2307.13854#12
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#12
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Notably, tasks derived from the same template can have distinct execution traces. The similarity resides primarily in the high-level semantics rather than the specific implementation. We also provided a prompt for the annotators to use with ChatGPT3 for inspiration; the prompt contains an overview of each website and instructs the model to describe potential tasks to be performed on these sites. Furthermore, we offered a curated list of examples for annotators to reference. Intent Analysis On average, each template is instantiated to 3.3 examples. The intent distribution is shown in Figure 6. Furthermore, we classify the intents into three primary categories, with examples shown in Figure 5: (1) Information-seeking tasks expect a textual response. Importantly, these tasks in WebArena often require navigation across multiple pages or focus on user-centric content. This makes them distinct from open-domain question answering (Yang et al., 2018; Kwiatkowski et al., 2019), which focuses on querying general knowledge with a simple retrieval step. For instance, to answer "
|
2307.13854#11
|
2307.13854#13
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#13
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
When was the last time I bought the shampoo", an agent traverses the user's purchase history, checking order details to identify the most recent shampoo purchase. (2) Site navigation: This category is composed of tasks that require navigating through web pages using a variety of interactive elements such as search functions and links. The objective is often to locate specific information or navigate to a particular section of a site.
3 https://chat.openai.com/
Figure 4 (action space):
noop: Do nothing; click(elem): Click at an element; hover(elem): Hover on an element; type(elem, text): Type to an element; press(key_comb): Press a key combination.
tab_focus(index): Focus on the i-th tab; new_tab: Open a new tab; tab_close: Close the current tab.
go_back: Visit the last URL; go_forward: Undo go_back; goto(URL): Go to URL.
Figure 5 (example intents):
Information Seeking: "When was the last time I bought shampoo"; "Compare walking and driving time from AMC Waterfront to Randyland".
Site Navigation: "Checkout merge requests assigned to me"; "Show me the ergonomic chair with the best rating".
Content & Config: Post to ask "
|
2307.13854#12
|
2307.13854#14
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#14
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
whether I need a car in NYC". Figure 4: Action Space of WebArena. Figure 5: Example intents from three categories. (3) Content and configuration operation: This category encapsulates tasks that require operating in the web environment to create, revise, or configure content or settings. This includes adjusting settings, managing accounts, performing online transactions, generating new web content, and modifying existing content. Examples range from updating a social media status or README file to conducting online purchases and configuring privacy settings. 3.2 EVALUATION ANNOTATION Evaluating Information-Seeking Tasks To measure the correctness of information-seeking tasks, where a textual answer is expected, we provide an annotated answer a* for each intent. The a* is then compared with the predicted answer â using one of the following scoring functions r_info(â, a*). First, we define exact_match, where only an â that is identical to a* receives a score of one. This function is primarily applicable to intent types whose responses follow a standardized format, similar to evaluation in the question-answering literature (Rajpurkar et al., 2016; Yang et al., 2018). Second, we create must_include, where any â containing a* receives a score of one. This function is primarily used when an unordered list of text is expected or when the emphasis of evaluation is on certain key concepts. In the second example in Table 1, we expect both the correct name and the email address to be present, irrespective of the precise wording used to convey the answer. Finally, we introduce fuzzy_match, where we utilize a language model to assess whether â is semantically equivalent to a*. Specifically, in this work, we use gpt-4-0613 to perform this evaluation. The corresponding prompt details are provided in Appendix A.6. The fuzzy_match function applies to situations where the format of the answer is diverse. For instance, in responding to "
|
2307.13854#13
|
2307.13854#15
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#15
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Compare the time for walking and driving route from AMC Waterfront to Randyland", it is essential to ensure that driving time and walking time are accurately linked with the correct terms. The fuzzy_match function can also flexibly match the time "2h58min" with different forms such as "2 hour 58 minutes", "2:58", and others. Evaluating Site Navigation and Content & Config Tasks The tasks in these categories require accessing web pages that meet certain conditions or performing operations that modify the underlying data storage of the respective websites. To assess these, we establish reward functions r_prog(s) that programmatically examine the intermediate states s within an execution trajectory to ascertain whether the outcome aligns with the intended result. These intermediate states are often the underlying databases of the websites and the status and content of a web page at each step of the execution. Evaluating each instance involves two components. First, we provide a locator, tasked with retrieving the critical content pertinent to each intent. The implementation of this locator varies from a database query to a website-supported API call to a JavaScript element selection on the relevant web page, depending on implementation feasibility. For example, the evaluation process for the intent of the fifth example in Table 1 first obtains the URL of the latest post by examining the last state in the state sequence s. Then it navigates to the corresponding post page and obtains the post's content by running the JavaScript document.querySelector('.submission__inner').outerText. Subsequently, we annotate keywords that need to exist within the located content. For example, the evaluation verifies if the post is correctly posted in the "
|
2307.13854#14
|
2307.13854#16
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#16
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
nyc" subreddit by examining the URL of
r_info examples:
ID 1: Intent: "Tell me the name of the customer who has the most cancellations in the history". Eval: exact_match(â, "Samantha Jones").
ID 2: Intent: "Find the customer name and email with phone number 8015551212". Eval: must_include(â, "Sean Miller"); must_include(â, "[email protected]").
ID 3: Intent: "Compare walking and driving time from AMC Waterfront to Randyland". Eval: fuzzy_match(â, "walking: 2h58min"); fuzzy_match(â, "driving: 21min").
r_prog examples:
ID 4: Intent: "Checkout merge requests assigned to me". Eval: url = locate_current_url(s); exact_match(url, "gitlab.com/merge_requests?assignee_username=byteblaze").
ID 5: Intent: Post to ask "
|
2307.13854#15
|
2307.13854#17
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#17
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
whether I need a car in NYC". Table 1: We introduce two evaluation approaches. r_info (top) measures the correctness of performing information-seeking tasks; it compares the predicted answer â with the annotated reference a* using three implementations. r_prog (bottom) programmatically checks whether the intermediate states during the executions possess the anticipated properties specified by the intent. the post, and if the post contains the requested content, by examining the post content. We reuse the exact_match and must_include functions from information-seeking tasks for this purpose. Unachievable Tasks Due to constraints such as inadequate evidence, user permissions (§A.3), or the absence of necessary functional support on the website, humans may ask for tasks that are not possible to complete. Inspired by previous work on evaluating question-answering models on unanswerable questions (Rajpurkar et al., 2018), we design unachievable tasks in WebArena. For instance, fulfilling an intent like "
|
2307.13854#16
|
2307.13854#18
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#18
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Tell me the contact number of OneStopShop" is impracticable in WebArena, given that the website does not provide such contact information. We label such instances as "N/A" and expect an agent to produce an equivalent response. These examples allow us to assess an agent's ability to avoid making unfounded claims and its adherence to factual accuracy. Annotation Process The intents were contributed by the authors following the annotation guideline in §3.1. Every author has extensive experience with web-based tasks. The reference answers to the information-seeking tasks were curated by the authors and an external annotator. To ensure consistency and accuracy, each question was annotated twice. If the two annotators disagreed, a third annotator finalized the annotation. The programs to evaluate the remaining examples were contributed by three of the authors, who are proficient in JavaScript programming. Difficult tasks were often discussed collectively to ensure the correctness of the annotation. The annotation required the annotator to undertake the full execution and scrutinize the intermediate states. Human Performance We sample one task from each of the 170 templates and ask five computer science graduate students to perform these tasks. The human performance is reported in Table 2. Overall, the human annotators complete 78.24% of the tasks, with lower performance on information-seeking tasks. Through examining the recorded trajectories, we found that 50% of the failures are due to misinterpreting the intent (e.g., providing travel distance when asked for travel time), incomplete answers (e.g., providing only the name when asked for the name and email), and incomplete executions (e.g., partially filling in the product information), while the remaining instances have more severe failures, where the executions are off-target. # 4 BASELINE WEB AGENTS We experiment with three LLMs using two prompting strategies, both with two examples in the context. In the first setting, we ask the LLM to directly predict the next action given the current observation, the intent, and the previously performed action. In the second setting, with the same information, the model first performs chain-of-thought reasoning in text before the action prediction (CoT; Wei et al., 2022; Yao et al., 2022b). Before the examples, we provide a detailed
|
2307.13854#17
|
2307.13854#19
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#19
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
overview of the browser environment, the allowed actions, and a set of rules. To make the model aware of the unachievable tasks, the instruction explicitly asks the agent to stop if it believes the task is impossible to perform. We refer to this directive as the Unachievable hint, or UA hint. This introduction is largely identical to the guidelines we presented to human annotators, to ensure a fair comparison. We use an accessibility tree with element IDs as the observation space. The agent can identify which element to interact with by the ID of the element. For instance, the agent can issue click [1582] to click the "Add to Cart" button with the ID 1582. The full prompts can be found in Appendix A.7. The detailed configurations of each model can be found in Appendix A.5. 5 RESULTS 5.1 MAIN RESULTS The main results are shown at the top of Table 2. GPT-4 (OpenAI, 2023) with CoT prompting achieves a modest end-to-end task success rate of 11.70%, which is significantly lower than the human performance of 78.24%. GPT-3.5 (OpenAI, 2022) with CoT prompting is only able to successfully perform 8.75% of the tasks. The explicit reasoning procedure is somewhat helpful; it brings a 2.34% improvement over the version without it. Further, TEXT-BISON-001 (Anil et al., 2023) underperforms GPT-3.5, with a success rate of 5.05%. These results underline the inherent challenges and complexities of executing tasks that span long horizons, particularly in realistic environments such as WebArena.
|
2307.13854#18
|
2307.13854#20
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#20
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Model | CoT | UA hint | SR | SR_AC | SR_UA
TEXT-BISON-001 | yes | yes | 5.05 | 4.00 | 27.78
GPT-3.5 | no | yes | 6.41 | 4.90 | 38.89
GPT-3.5 | yes | yes | 8.75 | 6.44 | 58.33
GPT-4 | yes | yes | 11.70 | 8.63 | 77.78
GPT-3.5 | no | no | 5.10 | 4.90 | 8.33
GPT-3.5 | yes | no | 6.16 | 6.06 | 8.33
GPT-4 | yes | no | 14.41 | 13.02 | 44.44
Human | - | - | 78.24 | 77.30 | 100.00
Table 2: The end-to-end task success rate (SR %) on WebArena with different prompting strategies, along with the success rates on achievable tasks (SR_AC) and unachievable tasks (SR_UA). CoT: the model performs step-by-step reasoning before issuing the action. UA hint: the instruction asks the model to stop when encountering unachievable questions. # 5.2 ANALYSIS Do models know when to stop? In our error analysis of the execution trajectories, we observe a prevalent error pattern of early stopping due to the model's conclusion of unachievability. For instance, GPT-4 erroneously identifies 54.9% of feasible tasks as impossible. This issue primarily stems from the UA hint in the instruction: while this hint allows models to identify unachievable tasks, it also hinders performance on achievable tasks. To address this, we conduct an ablation study in which we remove this hint. We then break down the success rate for both achievable and unachievable tasks. As shown in Table 2, eliminating this instruction led to a performance boost on achievable tasks, raising the overall task success rate of GPT-4 to 14.41%. Despite an overall decline in identifying unachievable tasks, GPT-4 retains the capacity to recognize 44.44% of such tasks. It does so by generating reasons for non-achievability, even without explicit instructions. On the other hand, GPT-3.5 rarely exhibits this level of reasoning. Instead, it tends to follow problematic patterns such as hallucinating incorrect answers, repeating invalid actions, or exceeding the step limits. This result suggests that even subtle differences in instruction design can significantly influence the behavior of a model performing interactive tasks in complex environments. Can a model maintain consistent performance across similar tasks? Tasks that originate from the same template usually follow similar reasoning and planning processes, even though their observations and executions will differ. We plot a histogram of per-template success rates for our models in Table 3.
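For concreteness, the two string-based scoring functions from §3.2 that underlie these success rates, exact_match and must_include, might be sketched as follows. This is an illustrative sketch rather than the benchmark's actual implementation; in particular, the case-insensitive matching and the multiplication of several must_include checks into one score are assumptions.

```python
def exact_match(prediction: str, reference: str) -> float:
    """Score 1.0 only when the predicted answer is identical to the reference a*."""
    return float(prediction.strip() == reference.strip())

def must_include(prediction: str, reference: str) -> float:
    """Score 1.0 when the reference string appears anywhere in the prediction."""
    return float(reference.strip().lower() in prediction.strip().lower())

# Example 2 from Table 1: both the name and the email must be present.
prediction = "The customer is Sean Miller and the email is [email protected]."
score = must_include(prediction, "Sean Miller") * must_include(prediction, "[email protected]")
print(score)  # 1.0
```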
|
2307.13854#19
|
2307.13854#21
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#21
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Of the 61 templates, GPT-4 manages to achieve a 100% task success rate on only four templates, while GPT-3.5 fails to achieve full task completion for any of the templates. In many cases, the models are only able to complete one task variation within a template. These observations indicate that even when tasks are derived from the same template, they can present
Table 3: Distribution of success rates (%) within a template for gpt-3.5-direct, gpt-3.5-cot, and gpt-4-cot, over templates with at least one successful execution on the GPT models (no UA hint).
Table 4 compares WebArena with existing benchmarks, namely Form/QAWoB (Shi et al., 2017), MiniWoB++ (Liu et al., 2018), Mind2Web (Deng et al., 2023), WebShop (Yao et al., 2022a), ALFRED (Shridhar et al., 2020), VirtualHome (Puig et al., 2018), and AndroidEnv (Toyama et al., 2021), along four dimensions: dynamic interaction, realistic environment, diverse human tasks, and functional correctness evaluation. WebArena satisfies all four, while each of the existing benchmarks compromises at least one of them. Table 4: The comparison between our benchmark and existing benchmarks on grounding natural language instructions to concrete executions. Our benchmark is implemented in our fully interactable, highly realistic environment. It features diverse tasks humans may encounter in their daily routines. We design evaluation metrics to assess the functional correctness of task executions.
distinct challenges. For instance, while "Fork metaseq" can be a straightforward task, "Fork all repos from Facebook", derived from the same template, requires more repetitive operations, hence increasing its complexity. Therefore, WebArena provides a testbed to evaluate more sophisticated methods, in particular those that incorporate memory components, enabling the reuse of successful strategies from past experiments (Zhou et al., 2022a; Wang et al., 2023). More error analysis with examples can be found in Appendix A.8. # 6 RELATED WORK
|
2307.13854#20
|
2307.13854#22
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#22
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Benchmarks for Controlling Agents through Natural Language Controlling agents via natural language in the digital world has been studied in the literature (Branavan et al., 2009; Shi et al., 2017; Liu et al., 2018; Toyama et al., 2021; Deng et al., 2023; Li et al., 2020; Xu et al., 2021). However, the balance between functionality, authenticity, and support for environmental dynamics remains a challenge. Existing benchmarks often compromise these aspects, as shown in Table 4. Some works rely on static states, limiting agents' exploration and functional correctness evaluation (Shi et al., 2017; Deng et al., 2023), while others simplify real-world complexities, restricting task variety (Yao et al., 2022a; Liu et al., 2018). While AndroidEnv (Toyama et al., 2021) replicates an Android setup, it does not evaluate complex and realistic tasks. This gap is also seen in synthetic environments (Kolve et al., 2017; Shridhar et al., 2020; Puig et al., 2018) and extends to gaming environments (Fan et al., 2022; Küttler et al., 2020), where the environment mechanisms often diverge from human objectives. Interactive Decision-Making Agents Nakano et al. (2021) introduce WebGPT, which searches the web and reads the search results to answer questions. Gur et al. (2023) propose a web agent that decomposes tasks into more manageable sub-tasks and synthesizes JavaScript code for the task executions. Adding a multi-modal dimension, Lee et al. (2023) and Shaw et al. (2023) develop agents that predict actions based on screenshots of web pages rather than relying on the text-based DOM trees. Performing tasks in interactive environments requires the agents to exhibit several capabilities, including hierarchical planning, state tracking, and error recovery. Existing works (Huang et al., 2022; Madaan et al., 2022; Li et al., 2023) observe that LLMs can break a task into more manageable sub-tasks (Zhou et al., 2022b).
|
2307.13854#21
|
2307.13854#23
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#23
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
This process can be further refined by representing task executions as programs, a technique that aids sub-task management and skill reuse (Zhou et al., 2022a; Liang et al., 2023; Wang et al., 2023; Gao et al., 2023). Meanwhile, search and backtracking methods introduce a more structured approach to planning while also allowing for decision reconsideration (Yao et al., 2023; Long, 2023). Existing works also incorporate failure recovery, self-correction (Shinn et al., 2023; Kim et al., 2023), observation summarization (Sridhar et al., 2023) to improve execution robustness. The complexity of WebArena presents a unique challenge and opportunity for further testing and improvement of these methods.
|
2307.13854#22
|
2307.13854#24
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#24
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
# 7 CONCLUSION We present WebArena, a highly realistic, standalone, and reproducible web environment designed for the development and testing of autonomous agents. WebArena includes fully functional web applications and genuine data from four major categories, providing a realistic platform for agent interaction. It further supports a wide range of tools and external knowledge bases, fostering a focus on human-like problem-solving. Additionally, we curate a comprehensive benchmark consisting of 812 examples that focus on translating high-level natural language intents into specific web interactions. We also offer metrics to programmatically ascertain whether tasks have been completed according to the desired objectives. Our experiments show that even GPT-4 achieves only a limited end-to-end task success rate of 14.41%, significantly lagging behind the human performance of 78.24%. These findings underscore the need for future research to focus on enhancing the robustness and efficacy of autonomous agents within the WebArena environment. # ACKNOWLEDGEMENT We would like to thank Emmy Liu, Zhiruo Wang, and Zhitong Guo for examining our annotations, Shunyu Yao for providing the raw Amazon product data in WebShop, and Pengfei Liu, Zaid Sheikh, and Aman Madaan for the helpful discussions. We are also grateful to the Center for AI Safety for providing computational resources. This material is partly based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200.
|
2307.13854#23
|
2307.13854#25
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#25
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. This project was also partially supported by a gift from AWS AI. # REFERENCES Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel.
|
2307.13854#24
|
2307.13854#26
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#26
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Vision-and-language navigation: Interpret- ing visually-grounded navigation instructions in real environments. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18- 22, 2018, pp. 3674â 3683. IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00387. URL http://openaccess.thecvf.com/content_cvpr_2018/html/Anderson_ Vision-and-Language_Navigation_Interpreting_CVPR_2018_paper.html. Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark DÃ
|
2307.13854#25
|
2307.13854#27
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#27
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
az, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. Palm 2 technical report, 2023. 10 Under review
|
2307.13854#26
|
2307.13854#28
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#28
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Yonatan Bisk, Jan Buys, Karl Pichotta, and Yejin Choi. Benchmarking hierarchical script knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4077â 4085, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1412. URL https://aclanthology.org/N19-1412. S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay.
|
2307.13854#27
|
2307.13854#29
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#29
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 82â 90, Suntec, Singapore, 2009. Association for Computational Linguistics. URL https://aclanthology.org/P09-1010. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016. URL https://arxiv.org/abs/1606.01540. Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harri Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. ArXiv preprint, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107. 03374. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web, 2023. Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum? id=rc8o_j8I8PX. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. Pal: Program-aided language models. In International Conference on Machine Learning, pp. 10764â 10799. PMLR, 2023. Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and In 2018 IEEE Ali Farhadi. IQA: visual question answering in interactive environments.
|
2307.13854#28
|
2307.13854#30
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#30
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 4089â 4098. IEEE Computer Society, 2018. doi: 10.1109/CVPR. 2018.00430. URL http://openaccess.thecvf.com/content_cvpr_2018/html/ Gordon_IQA_Visual_Question_CVPR_2018_paper.html. Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856, 2023. Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero- shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 9118â
|
2307.13854#29
|
2307.13854#31
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#31
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
9147. PMLR, 2022. URL https://proceedings.mlr.press/v162/huang22a.html. Yacine Jernite, Kavya Srinet, Jonathan Gray, and Arthur Szlam. CraftAssist Instruction Parsing: Semantic Parsing for a Minecraft Assistant. ArXiv preprint, abs/1905.01978, 2019. URL https: //arxiv.org/abs/1905.01978. Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. ArXiv preprint, abs/2303.17491, 2023. URL https://arxiv.org/abs/2303.17491. Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An Interactive 3D Environment for Visual AI. arXiv, 2017. 11 Under review
|
2307.13854#30
|
2307.13854#32
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#32
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack learning environment. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov.
|
2307.13854#31
|
2307.13854#33
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#33
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452â 466, 2019. doi: 10.1162/tacl_a_00276. URL https://aclanthology.org/Q19-1026. Yann LeCun. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. Open Review, 62, 2022. Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2struct: Screenshot parsing as pretraining for visual language understanding. In International Conference on Machine Learning, pp. 18893â 18912. PMLR, 2023. Xinze Li, Yixin Cao, Muhao Chen, and Aixin Sun.
|
2307.13854#32
|
2307.13854#34
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#34
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Take a break in the middle: Investigating subgoals towards hierarchical script generation. ArXiv preprint, abs/2305.10907, 2023. URL https://arxiv.org/abs/2305.10907. Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. Mapping natural lan- In Proceedings of the 58th Annual Meet- guage instructions to mobile UI action sequences. ing of the Association for Computational Linguistics, pp. 8198â 8210, Online, 2020. Asso- ciation for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.729. URL https: //aclanthology.org/2020.acl-main.729. Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9493â 9500. IEEE, 2023. Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id= ryTp3f-0-. Jieyi Long. Large language model guided tree-of-thought. ArXiv preprint, abs/2305.08291, 2023. URL https://arxiv.org/abs/2305.08291. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language models of code are few-shot commonsense learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 1384â
|
2307.13854#33
|
2307.13854#35
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#35
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
1403, Abu Dhabi, United Arab Emirates, 2022. Association for Computational Linguistics. URL https://aclanthology.org/ 2022.emnlp-main.90. Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. Tell me dave: Context-sensitive grounding of natural language to manipulation instructions. The International Journal of Robotics Research, 35(1-3):281â 300, 2016. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. OpenAI. Chatgpt: Optimizing language models for dialogue. 2022. OpenAI. Gpt-4 technical report. arXiv, pp. 2303â 08774, 2023. 12 Under review Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730â 27744, 2022. Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio In 2018 IEEE Con- Torralba. Virtualhome: Simulating household activities via programs. ference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 8494â 8502. IEEE Computer Society, 2018. doi: 10.1109/CVPR. 2018.00886. URL http://openaccess.thecvf.com/content_cvpr_2018/html/ Puig_VirtualHome_Simulating_Household_CVPR_2018_paper.html. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang.
|
2307.13854#34
|
2307.13854#36
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#36
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383â 2392, Austin, Texas, 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264. Pranav Rajpurkar, Robin Jia, and Percy Liang.
|
2307.13854#35
|
2307.13854#37
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#37
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Know what you don't know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 784–789, Melbourne, Australia, 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2124. URL https://aclanthology.org/P18-2124. Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, and Kristina Toutanova.
|
2307.13854#36
|
2307.13854#38
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#38
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
From pixels to ui actions: Learning to follow instructions via graphical user interfaces. arXiv preprint arXiv:2306.00245, 2023. Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3135â 3144. PMLR, 2017. URL http://proceedings.mlr.press/v70/shi17a.html. Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. ArXiv preprint, abs/2303.11366, 2023. URL https://arxiv. org/abs/2303.11366. Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks.
|
2307.13854#37
|
2307.13854#39
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#39
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 10737â 10746. IEEE, 2020. doi: 10. 1109/CVPR42600.2020.01075. URL https://doi.org/10.1109/CVPR42600.2020. 01075. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J.
|
2307.13854#38
|
2307.13854#40
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#40
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id= 0IOX0YcCdTn. Abishek Sridhar, Robert Lo, Frank F Xu, Hao Zhu, and Shuyan Zhou. Hierarchical prompting assists large language model on web navigation. arXiv preprint arXiv:2305.14257, 2023. Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. Androidenv: A reinforcement learning platform for android. ArXiv preprint, abs/2105.13231, 2021. URL https://arxiv.org/abs/2105. 13231. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar.
|
2307.13854#39
|
2307.13854#41
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#41
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Voyager: An open-ended embodied agent with large language models. ArXiv preprint, abs/2305.16291, 2023. URL https://arxiv.org/abs/2305.16291. 13 Under review Zhiruo Wang, Shuyan Zhou, Daniel Fried, and Graham Neubig. Execution-based evaluation for open-domain code generation. ArXiv preprint, abs/2212.10481, 2022. URL https://arxiv. org/abs/2212.10481. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824â 24837, 2022. Nancy Xu, Sam Masling, Michael Du, Giovanni Campagna, Larry Heck, James Landay, and Monica Lam. Grounding open-domain instructions to automate web support tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1022â 1032, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.80. URL https://aclanthology.org/ 2021.naacl-main.80. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D.
|
2307.13854#40
|
2307.13854#42
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#42
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369â 2380, Brussels, Belgium, 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259. Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan.
|
2307.13854#41
|
2307.13854#43
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#43
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Webshop: Towards scalable real-world web interaction with grounded language agents. volume abs/2207.01206, 2022a. URL https://arxiv.org/abs/2207.01206. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. ArXiv preprint, abs/2210.03629, 2022b. URL https://arxiv.org/abs/2210.03629. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. ArXiv preprint, abs/2305.10601, 2023. URL https://arxiv.org/abs/2305.10601. Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. arxiv 2017. ArXiv preprint, abs/1709.00103, 2017. URL https://arxiv.org/abs/1709.00103. Shuyan Zhou, Pengcheng Yin, and Graham Neubig. Hierarchical control of situated agents through natural language. In Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI), pp. 67â 84, Seattle, USA, 2022a. Association for Computational Linguistics. doi: 10.18653/v1/2022.suki-1.8. URL https://aclanthology.org/2022.suki-1.8. Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, and Graham Neubig. Show me more details: Discovering hierarchies of procedures from semi-structured web data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2998â 3012, Dublin, Ireland, 2022b. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.214.
|
2307.13854#42
|
2307.13854#44
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#44
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
URL https://aclanthology.org/2022.acl-long.214. # A APPENDIX A.1 WEBSITE IMPLEMENTATION Given the selected websites described in §2.2, we make the best attempt to reproduce the functionality of commonly used sites in a reproducible way. To achieve this, we utilized open-source frameworks for the development of the websites across various categories and imported data from their real-world counterparts. For the E-commerce category, we constructed a shopping website with approximately 90k products, including the prices, options, detailed product descriptions, images, and reviews, spanning over 300 product categories. This website is developed using Adobe Magento, an open-source e-commerce platform.4 Data were obtained from actual online sites, such as that included in the WebShop data dump (Yao et al., 2022a). As for the social forum platform, we deployed the open-source software Postmill,5 the open-source counterpart of Reddit.6 We sampled from the top 50 subreddits.7 We then manually selected several subreddits for northeast US cities as well as subreddits for machine learning and deep learning-related topics. This manual selection encourages cross-website tasks such as seeking information related to the northeast US on both Reddit and the map. In total, we have 95 subreddits, 127,390 posts, and 661,781 users.
|
2307.13854#43
|
2307.13854#45
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#45
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
For the collaborative software development platform, we choose GitLab.8 We heuristically simulate the code repository characteristics by sampling at least ten repositories for every programming language: 80% of them are sampled from the set of repositories in the top 90th percentile with respect to stars, using a discrete probability distribution weighted proportionally to their number of stars; the remaining are sampled from the bottom ten percentile set using a similarly weighted distribution. This is done to ensure fair representation of repositories of all kinds, from popular projects with many issues and pull requests to small personal projects. In total, we have 300 repositories and more than 1000 accounts with at least one commit to a repository.
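A rough sketch of the star-weighted sampling described above (illustrative only): it draws repositories with probability proportional to their star counts. Unlike the actual procedure, the sketch samples with replacement and omits the 80/20 split between the top-90th-percentile and bottom-10th-percentile pools.

```python
import random

def sample_by_stars(repos: list[dict], k: int) -> list[dict]:
    """Draw k repositories with probability proportional to their number of stars."""
    weights = [max(repo["stars"], 1) for repo in repos]  # guard against zero-star repos
    return random.choices(repos, weights=weights, k=k)

# Hypothetical candidate pool for one programming language.
candidates = [
    {"name": "popular-project", "stars": 12000},
    {"name": "mid-project", "stars": 800},
    {"name": "tiny-project", "stars": 3},
]
print(sample_by_stars(candidates, k=2))
```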
|
2307.13854#44
|
2307.13854#46
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#46
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
For the content management system, we adapted Adobe Magento's admin portal, deploying the sample data provided in the official guide. We employ OpenStreetMap9 for the map service implementation, confining our focus to the northeast US region due to data storage constraints. We implement a calculator and a scratchpad ourselves. Lastly, we configure the knowledge resources as individual websites, complemented with search functionality for efficient information retrieval. Specifically, we utilize Kiwix10 to host an offline version of English Wikipedia with a knowledge cutoff of May 2023. The user manuals for GitLab and the Adobe Commerce Merchant documentation are scraped from the official websites.
|
2307.13854#45
|
2307.13854#47
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#47
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
A.2 ENVIRONMENT DELIVERY AND RESET One goal for our evaluation environment is ease of use and reproducibility. As a result, we deploy our websites in separate Docker images,11 one per website. The Docker images are fully self-contained, with all the code of the website, the database, as well as any other software dependencies. They also do not rely on external volume mounts to function, as the data of the websites are also part of the Docker image. This way, the image is easy to distribute, containing all the pre-populated websites for reproducible evaluation. End users can download our packaged Docker images, run them on their systems, and re-deploy the exact websites together with the data used in our benchmarks for their local benchmarking. Since some evaluation cases may require the agent to modify the data contained in the website, e.g., creating a new user, deleting a post, etc., it is crucial to be able to easily reset the website environment to its initial state. With Docker images, users can stop and delete the currently running containers for that website and start the container from our original image again to fully reset the environment to the initial state. Depending on the website, this process may take from a few seconds to one minute. However, not all evaluation cases would require an environment reset, as
|
2307.13854#46
|
2307.13854#48
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#48
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
4 https://github.com/magento/magento2 5 https://postmill.xyz/ 6 https://www.reddit.com/ 7 https://redditlist.com/sfw.html 8 https://gitlab.com/gitlab-org/gitlab 9 https://www.openstreetmap.org/ 10 https://www.kiwix.org/en/ 11 https://www.docker.com/
Figure 6: The intent distribution across different websites (Map, CMS, E-commerce, Cross-Site, Reddit, GitLab). Cross-site intents necessitate interacting with multiple websites. Notably, regardless of the website, all user intents require interactions with multiple web pages.
many of the intents are information gathering and are read-only with respect to the website data. Also, considering the inference-time cost of the agent LLMs, we argue that this environment reset method, restarting Docker containers from the original images, has a non-negligible but small impact on evaluation time. A.3 USER ROLES SIMULATION Users of the same website often have disparate experiences due to their distinct roles, permissions, and interaction histories. For instance, within an E-commerce CMS, a shop owner might possess full read and write permissions across all content, whereas an employee might only be granted write permissions for products but not for customer data. We aim to emulate this scenario by generating unique user profiles on each platform. On the shopping site, we created a customer profile that has over 35 orders within a span of two years. On GitLab, we selected a user who maintains several popular open-source projects with numerous merge requests and issues. This user also manages a handful of personal projects privately. On Reddit, our chosen profile was a user who actively participates in discussions, with many posts and comments. Lastly, on our E-commerce CMS, we set up a user profile for a shop owner who has full read-and-write access to all system contents. All users are automatically logged into their accounts using a pre-cached cookie. To the best of our knowledge, this is the first publicly available agent evaluation environment to implement such a characteristic. Existing literature typically operates under the assumption of universally identical user roles (Shi et al., 2017; Liu et al., 2018; Deng et al., 2023). A.4 INTENT DISTRIBUTION The distribution of intents across the websites is shown in Figure 6. A.5 EXPERIMENT CONFIGURATIONS
|
2307.13854#47
|
2307.13854#49
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#49
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
We experiment with GPT-3.5-TURBO-16K-0613, GPT-4-0613, and TEXT-BISON-001 with a temperature of 1.0 and a top-p parameter of 0.9. The maximum number of state transitions is set to 30. We halt execution if the same action is repeated more than three times on the same observation or if the agent generates three consecutive invalid actions. These situations typically indicate a high likelihood of execution failure and hence warrant early termination. For TEXT-BISON-001, we additionally allow ten retries until it generates a valid action. We primarily use a high temperature of 1.0 to encourage exploration. To aid in replicating the results, we provide the results of GPT-3.5-TURBO-16K-0613 with temperature 0.0 in Table 5 and the execution trajectories in our code repository.
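The early-termination rules above could be implemented along the following lines. This is a sketch under assumed data structures (a list of step records with action, observation, and validity fields), not the repository's actual code.

```python
def should_halt(history: list[dict], max_steps: int = 30) -> bool:
    """Stop when the step budget is exhausted, the same action is repeated more than
    three times on the same observation, or three consecutive actions are invalid."""
    if len(history) >= max_steps:
        return True
    # Same action issued more than three times on the same observation.
    if len(history) >= 4:
        last = history[-4:]
        if all(step["action"] == last[0]["action"] and
               step["observation"] == last[0]["observation"] for step in last):
            return True
    # Three consecutive invalid actions.
    if len(history) >= 3 and all(not step["valid"] for step in history[-3:]):
        return True
    return False
```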
|
2307.13854#48
|
2307.13854#50
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#50
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
GPT-3.5 (CoT, with UA hint) | SR 6.28
Table 5: The task success rate (SR %) of GPT-3.5-TURBO-16K-0613 with temperature 0.0.
# A.6 PROMPT FOR FUZZY_MATCH
Help a teacher to grade the answer of a student given a question. Keep in mind that the student may use different phrasing or wording to answer the question. The goal is to evaluate whether the answer is semantically equivalent to the reference answer.
question: {{intent}}
reference answer: {{reference answer}}
all the string "N/A" that you see is a special sequence that means "not achievable"
student answer: {{prediction}}
Conclude the judgement by correct/incorrect/partially correct.
Predictions that are judged as "correct" will receive a score of one, while all other predictions will receive a score of zero.
A.7 THE PROMPTS OF THE BASELINE WEB AGENTS The system message of the reasoning agent for both GPT-3.5 and GPT-4 is in Figure 7, and the two examples are in Figure 8. The system message of the direct agent for GPT-3.5 is in Figure 9 and the two examples are in Figure 10. UA hint refers to the instruction "If you believe the task is impossible to complete, provide the answer as "N/A" in the bracket." We remove this sentence in our ablation studies. A.8 ADDITIONAL ERROR ANALYSIS Observation Bias Realistic websites frequently present information on similar topics across various sections to ensure optimal user accessibility. However, a GPT-4 agent often demonstrates a tendency to latch onto the first related piece of information it encounters without sufficiently verifying its relevance or accuracy. For instance, the homepage of the E-commerce CMS displays the best-selling items based on recent purchases, while historical best-seller data is typically accessed via a separate report. Presented with the task "What is the top-1 best-selling product in 2022", the GPT-4 agent defaults to leveraging the readily available information on the homepage, bypassing the necessary step of generating the report to obtain the accurate data.
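For reference, the fuzzy_match grading described in A.6 could be wired up roughly as follows. The sketch assumes a generic llm_call callable (e.g., a thin wrapper around gpt-4-0613) and a simple substring check on the verdict; the prompt text is abridged from A.6, and only answers judged "correct" score one.

```python
GRADING_PROMPT = (
    "Help a teacher to grade the answer of a student given a question. "
    "The goal is to evaluate whether the answer is semantically equivalent to the reference answer.\n"
    "question: {intent}\n"
    "reference answer: {reference}\n"
    "student answer: {prediction}\n"
    "Conclude the judgement by correct/incorrect/partially correct."
)

def fuzzy_match(intent: str, reference: str, prediction: str, llm_call) -> float:
    """LLM-judged semantic equivalence; `llm_call` maps a prompt string to a response string."""
    verdict = llm_call(GRADING_PROMPT.format(
        intent=intent, reference=reference, prediction=prediction)).lower()
    if "partially correct" in verdict or "incorrect" in verdict:
        return 0.0
    return 1.0 if "correct" in verdict else 0.0
```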
|
2307.13854#49
|
2307.13854#51
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#51
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Failures in Observation Interpretation Interestingly, while GPT-4 is capable of summarizing the observations, it occasionally overlooks more granular information, such as previously entered input. As in the right-hand example of Figure 11, [5172] StaticText indicates that the search term "DMV area" has already been entered. However, the agent disregards this detail and continuously issues the command type [2430] [DMV area] until it reaches the maximum step limit. Furthermore, the agent often neglects the previous-action information that is provided alongside the observation. We hypothesize that these observed failures are related to the current pretraining and supervised fine-tuning on dialogues employed in GPT models (Ouyang et al., 2022). These models are primarily trained to execute instructions given immediate observations (i.e., the dialogue history); they may thereby exhibit a lack of exploration. Furthermore, in dialogue scenarios, subtle differences in natural language expressions often have less impact on the overall conversation. As a result, models may tend to overlook minor variations in their observations.
|
2307.13854#50
|
2307.13854#52
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#52
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
17 Under review You are an autonomous intelligent agent tasked with navigating a web browser. You will be given web-based tasks. These tasks will be accomplished through the use of specific actions you can issue. Hereâ s the information youâ ll have: The userâ s objective: This is the task youâ re trying to complete. The current web pageâ s accessibility tree: This is a simplified representation of the webpage, providing key information. The current web pageâ s URL: This is the page youâ re currently navigating. The open tabs: These are the tabs you have open. The previous action: This is the action you just performed. It may be helpful to track your progress. The actions you can perform fall into several categories: Page Operation Actions `click [id]`: This action clicks on an element with a specific id on the webpage. `type [id] [content] [press_enter_after=0|1]`: Use this to type the content into the field with id. By default, the "Enter" key is pressed after typing unless press_enter_after is set to 0. `hover [id]`: Hover over an element with id. `press [key_comb]`: Simulates the pressing of a key combination on the keyboard (e.g., Ctrl+v). `scroll [direction=down|up]`: Scroll the page up or down. Tab Management Actions: `new_tab`: Open a new, empty browser tab. `tab_focus [tab_index]`: Switch the browserâ s focus to a specific tab using its index. `close_tab`: Close the currently active tab. URL Navigation Actions: `goto [url]`: Navigate to a specific URL. `go_back`: Navigate to the previously viewed page. `go_forward`: Navigate to the next page (if a previous `go_back` action was performed). Completion Action: `stop [answer]`: Issue this action when you believe the task is complete. If the objective is to find a text-based answer, provide the answer in the bracket. If you believe the task is impossible to complete, provide the answer as "N/A" in the bracket.
|
2307.13854#51
|
2307.13854#53
|
2307.13854
|
[
"2112.09332"
] |
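The action space enumerated in the system message above is a small, regular command language, so each emitted string can be mapped back to a structured action with a handful of patterns. The parser below is a rough sketch under that assumption; it is illustrative only and not the benchmark's own parsing code.

```python
import re

# Illustrative parser for the action strings listed in the system message above.
NO_ARG_ACTIONS = {"new_tab", "close_tab", "go_back", "go_forward"}
PATTERNS = {
    "click": r"^click \[(\d+)\]$",
    "hover": r"^hover \[(\d+)\]$",
    "type": r"^type \[(\d+)\] \[(.+?)\](?: \[([01])\])?$",   # press_enter_after defaults to 1
    "press": r"^press \[([\w\+]+)\]$",
    "scroll": r"^scroll \[?(?:direction=)?(down|up)\]?$",
    "tab_focus": r"^tab_focus \[(\d+)\]$",
    "goto": r"^goto \[(.+)\]$",
    "stop": r"^stop \[(.*)\]$",
}

def parse_action(action: str) -> dict:
    """Map an emitted action string to {'type': ..., 'args': [...]}."""
    action = action.strip()
    if action in NO_ARG_ACTIONS:
        return {"type": action, "args": []}
    for name, pattern in PATTERNS.items():
        match = re.match(pattern, action)
        if match:
            return {"type": name, "args": [g for g in match.groups() if g is not None]}
    raise ValueError(f"Unrecognized action: {action!r}")

# parse_action("type [164] [restaurants near ABC] [1]")
# -> {'type': 'type', 'args': ['164', 'restaurants near ABC', '1']}
```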
2307.13854#53
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Homepage: If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of websites you can visit. http://homepage.com/password.html lists all the account names and passwords for the websites. You can use them to log in to the websites. To be successful, it is very important to follow the following rules: 1. You should only issue an action that is valid given the current observation 2. You should only issue one action at a time. 3. You should follow the examples to reason step by step and then issue the next action. 4. Generate the action in the correct format. Start with a "In summary, the next action I will perform is" phrase, followed by action inside ``````. For example, "In summary, the next action I will perform is ```click [1234]```". 5. Issue stop action when you think you have achieved the objective. Don't generate anything after stop. Figure 7: The system message of the reasoning agent. This message has a general explanation of the task, the available actions, and some notes on avoiding common failures.
|
2307.13854#52
|
2307.13854#54
|
2307.13854
|
[
"2112.09332"
] |
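Rule 4 above fixes the output format: the reasoning agent must end its turn with the chosen action wrapped in triple backticks. One way to recover that action, assuming the last fenced span in the reply is the one to execute (a convention for this sketch, not necessarily the benchmark's exact parsing), is shown below.

```python
import re

# Sketch of recovering the action from a reasoning-agent reply.
def extract_action(response: str) -> str:
    fenced = re.findall(r"```(.*?)```", response, flags=re.DOTALL)
    if not fenced:
        raise ValueError("no fenced action found in the model response")
    return fenced[-1].strip()

# extract_action("Let's think step-by-step. ... In summary, the next action "
#                "I will perform is ```click [1234]```")  ->  "click [1234]"
```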
2307.13854#54
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
example_user OBSERVATION: [1744] link 'HP CB782A#ABA 640 Inkjet Fax Machine (Renewed)' [1749] StaticText '$279.49' [1757] button 'Add to Cart' [1760] button 'Add to Wish List' [1761] button 'Add to Compare' URL: http://onestopmarket.com/office-products/office-electronics.html OBJECTIVE: What is the price of HP Inkjet Fax Machine PREVIOUS ACTION: None example_assistant Let's think step-by-step.
|
2307.13854#53
|
2307.13854#55
|
2307.13854
|
[
"2112.09332"
] |
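The in-context examples above are labelled example_user and example_assistant, mirroring the common chat-API convention of sending few-shot turns as named system messages before the real user turn. The sketch below shows how such a request could be assembled; the message layout and function name are assumptions for illustration, not a documented detail of the baseline agents.

```python
# Sketch of packing the system message, few-shot examples, and the current page
# state into a chat request; the layout here is an assumption.
def build_messages(system_message, examples, observation, url, objective, previous_action):
    messages = [{"role": "system", "content": system_message}]
    for user_turn, assistant_turn in examples:
        messages.append({"role": "system", "name": "example_user", "content": user_turn})
        messages.append({"role": "system", "name": "example_assistant", "content": assistant_turn})
    state = (
        f"OBSERVATION: {observation}\n"
        f"URL: {url}\n"
        f"OBJECTIVE: {objective}\n"
        f"PREVIOUS ACTION: {previous_action}"
    )
    messages.append({"role": "user", "content": state})
    return messages
```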
2307.13854#55
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
This page lists the information of HP Inkjet Fax Machine, which is the product identified in the objective. Its price is $279.49. I think I have achieved the objective. I will issue the stop action with the answer. In summary, the next action I will perform is ```stop [$279.49]``` example_user OBSERVATION: [164] textbox 'Search' focused: True required: False [171] button 'Go' [174] link 'Find directions between two points' [212] heading 'Search Results' [216] button 'Close'
|
2307.13854#54
|
2307.13854#56
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#56
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
URL: http://openstreetmap.org OBJECTIVE: Show me the restaurants near ABC PREVIOUS ACTION: None example_assistant Let's think step-by-step. This page has a search box whose ID is [164]. According to the nominatim rule of openstreetmap, I can search for the restaurants near a location by "restaurants near". I can submit my typing by pressing the Enter afterwards. In summary, the next action I will perform is ```type [164] [restaurants near ABC] [1]``` example_assistant Let's think step-by-step.
|
2307.13854#55
|
2307.13854#57
|
2307.13854
|
[
"2112.09332"
] |
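The action `type [164] [restaurants near ABC] [1]` in the example above bundles the element id, the text to enter, and a press_enter_after flag. Executing it in a browser automation layer such as Playwright could look roughly like the sketch below; how an accessibility-tree id is resolved back to a DOM locator is environment-specific, so `locator_for` is a hypothetical stand-in rather than an actual WebArena helper.

```python
# Rough sketch of executing a parsed `type` action with Playwright.
def execute_type(page, locator_for, element_id: str, content: str, press_enter: bool = True) -> None:
    field = locator_for(page, element_id)  # hypothetical: accessibility-tree id -> Playwright Locator
    field.fill(content)                    # enter the content into the field
    if press_enter:                        # corresponds to press_enter_after=1 in the action string
        page.keyboard.press("Enter")
```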
2307.13854#57
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
This page has a search box whose ID is [164]. According to the nominatim rule of openstreetmap, I can search for the restaurants near a location by "restaurants near". I can submit my typing by pressing the Enter afterwards. In summary, the next action I will perform is ```type [164] [restaurants near ABC] [1]``` Figure 8: The two examples provided as example_user and example_assistant for the reasoning agent. Before issuing the action, the agent first performs reasoning.
|
2307.13854#56
|
2307.13854#58
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#58
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
You are an autonomous intelligent agent tasked with navigating a web browser. You will be given web-based tasks. These tasks will be accomplished through the use of specific actions you can issue. Here's the information you'll have: The user's objective: This is the task you're trying to complete. The current web page's accessibility tree: This is a simplified representation of the webpage, providing key information. The current web page's URL: This is the page you're currently navigating. The open tabs: These are the tabs you have open. The previous action: This is the action you just performed. It may be helpful to track your progress. The actions you can perform fall into several categories: Page Operation Actions `click [id]`: This action clicks on an element with a specific id on the webpage. `type [id] [content] [press_enter_after=0|1]`: Use this to type the content into the field with id. By default, the "Enter" key is pressed after typing unless press_enter_after is set to 0. `hover [id]`: Hover over an element with id. `press [key_comb]`: Simulates the pressing of a key combination on the keyboard (e.g., Ctrl+v). `scroll [direction=down|up]`: Scroll the page up or down. Tab Management Actions: `new_tab`: Open a new, empty browser tab. `tab_focus [tab_index]`: Switch the browser's focus to a specific tab using its index. `close_tab`: Close the currently active tab. URL Navigation Actions: `goto [url]`: Navigate to a specific URL. `go_back`: Navigate to the previously viewed page. `go_forward`: Navigate to the next page (if a previous `go_back` action was performed). Completion Action: `stop [answer]`: Issue this action when you believe the task is complete. If the objective is to find a text-based answer, provide the answer in the bracket. If you believe the task is impossible to complete, provide the answer as "N/A" in the bracket.
|
2307.13854#57
|
2307.13854#59
|
2307.13854
|
[
"2112.09332"
] |
2307.13854#59
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Homepage: If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of websites you can visit. http://homepage.com/password.html lists all the account names and passwords for the websites. You can use them to log in to the websites. To be successful, it is very important to follow the following rules: 1. You should only issue an action that is valid given the current observation 2. You should only issue one action at a time. 3. Generate the action in the correct format. Always put the action inside a pair of ```. For example, ```click [1234]``` 4. Issue stop action when you think you have achieved the objective.
|
2307.13854#58
|
2307.13854#60
|
2307.13854
|
[
"2112.09332"
] |
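Unlike the reasoning agent, the direct agent's reply is expected to contain only the fenced action, and a `stop [N/A]` answer signals that the agent judges the task unachievable (the UA hint discussed earlier). The interpreter below is a small illustrative sketch under those assumptions; the names are hypothetical and do not reflect the benchmark's actual API.

```python
import re

# Illustrative interpreter for the direct agent's output.
def interpret_direct_output(response: str) -> dict:
    fenced = re.findall(r"```(.*?)```", response, flags=re.DOTALL)
    action = fenced[-1].strip() if fenced else response.strip()
    stop = re.match(r"^stop \[(.*)\]$", action)
    if stop:
        answer = stop.group(1).strip()
        # "N/A" is the special answer the agent uses for unachievable tasks.
        return {"type": "stop", "answer": answer, "unachievable": answer == "N/A"}
    return {"type": "action", "action": action}
```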
2307.13854#60
|
WebArena: A Realistic Web Environment for Building Autonomous Agents
|
Don't generate anything after stop. Figure 9: The system message of the direct agent. This message has the general explanation of the task, the available actions, and some notes on avoiding common failures. example_user OBSERVATION: [1744] link 'HP CB782A#ABA 640 Inkjet Fax Machine (Renewed)' [1749] StaticText '$279.49' [1757] button 'Add to Cart' [1760] button 'Add to Wish List' [1761] button 'Add to Compare' URL: http://onestopmarket.com/office-products/office-electronics.html OBJECTIVE: What is the price of HP Inkjet Fax Machine PREVIOUS ACTION: None example_assistant ```stop [$279.49]``` example_user OBSERVATION: [164] textbox 'Search' focused: True required: False [171] button 'Go' [174] link 'Find directions between two points' [212] heading 'Search Results' [216] button 'Close'
|
2307.13854#59
|
2307.13854#61
|
2307.13854
|
[
"2112.09332"
] |