Dataset schema (field: type, observed value range):
- doi: string (length 10)
- chunk-id: int64 (0 to 936)
- chunk: string (length 401 to 2.02k)
- id: string (length 12 to 14)
- title: string (length 8 to 162)
- summary: string (length 228 to 1.92k)
- source: string (length 31)
- authors: string (length 7 to 6.97k)
- categories: string (length 5 to 107)
- comment: string (length 4 to 398)
- journal_ref: string (length 8 to 194)
- primary_category: string (length 5 to 17)
- published: string (length 8)
- updated: string (length 8)
- references: list
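Each record below carries the fields listed in this schema. The sketch that follows shows one way to load and sanity-check such rows, assuming they are exported as a JSON Lines file; the file name is a placeholder, not part of the dataset.

```python
import json

# Minimal sketch: load the rows from a JSON Lines dump and inspect the schema.
# "papers_chunks.jsonl" is a placeholder file name, not part of the original dataset.
with open("papers_chunks.jsonl", "r", encoding="utf-8") as f:
    rows = [json.loads(line) for line in f]

# Each row is expected to carry the fields listed in the schema above.
expected_fields = {
    "doi", "chunk-id", "chunk", "id", "title", "summary", "source",
    "authors", "categories", "comment", "journal_ref", "primary_category",
    "published", "updated", "references",
}

for row in rows:
    assert expected_fields.issubset(row.keys())
    # "references" is a list of {"id": ...} entries pointing at cited arXiv papers.
    cited_ids = [ref["id"] for ref in row["references"]]
    print(row["id"], row["title"][:60], len(cited_ids), "references")
```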
2307.13692
43
Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondřej Čertík, Sergey B. Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rathnayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, Štěpán Roučka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz. SymPy: Symbolic computing in Python. PeerJ Computer Science, 3:e103, January 2017. ISSN 2376-5992. doi: 10.7717/peerj-cs.103. URL https://doi.org/10.7717/peerj-cs.103.
2307.13692#43
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
43
Significance codes: ‘***’ for 0.001 and ‘**’ for 0.01

Emotion   R-squared  Independent variable  Standardized Coefficients  Std. Err   t-value   p
Hope      0.581      Constant                             34.2912      0.944      36.315   ***
                     Likelihood                            0.5574      0.023      23.899   ***
                     Control                               0.1073      0.026       4.123   ***
Fear      0.580      Constant                             65.4259      1.099      59.534   ***
                     Utility                               4.7297      0.470      10.053   ***
                     Likelihood                           -0.5182      0.025     -20.794   ***
                     Control                              -0.1887      0.028      -6.781   ***
Joy       0.713      Constant                            -48.6200      6.788      -7.163   ***
                     Utility                              -1.5792      0.570      -2.769   ***
                     Importance                            0.4532      0.073       6.241   ***
                     Likelihood                            0.9561      0.030      32.024   ***
                     Control                               0.1152      0.033       3.490   ***
Sadness   0.515      Constant                             24.9857      5.585       4.473   ***
                     Utility                               2.1672      0.469       4.618   ***
                     Importance                            0.3108      0.060       5.203   ***
                     Likelihood                           -0.5416      0.025     -22.045   ***
                     Control                              -0.0641      0.027      -2.360   **
2307.13779#43
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
43
Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. PAL: Program-aided language models. In International Conference on Machine Learning, pp. 10764–10799. PMLR, 2023. Daniel Gordon, Aniruddha Kembhavi, Mohammad Rastegari, Joseph Redmon, Dieter Fox, and Ali Farhadi. IQA: Visual question answering in interactive environments. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 4089–4098. IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00430. URL http://openaccess.thecvf.com/content_cvpr_2018/html/Gordon_IQA_Visual_Question_CVPR_2018_paper.html. Izzeddin Gur, Hiroki Furuta, Austin Huang, Mustafa Safdari, Yutaka Matsuo, Douglas Eck, and Aleksandra Faust. A real-world webagent with planning, long context understanding, and program synthesis. arXiv preprint arXiv:2307.12856, 2023.
2307.13854#43
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
44
6.2.2 Performance Analysis We take a closer look at performance in different scenarios by examining evaluated cases. KB-based QA The fact-checking capability of FACTOOL on KB-based QA is determined by several factors, including whether the search engine can return the most relevant snippets that could assist in determining the factuality of the given claim, the quality of the generated search engine queries, and the LLM’s ability to reason about the validity of the claim given the retrieved evidence. We found that FACTOOL powered by GPT-4 is especially capable in the following situations: (1) Fact-checking recent events, discoveries, or news: FACTOOL powered by GPT-4 successfully identifies false claims such as “Argentina has not won the World Cup since 1986” and “The most expensive NFT ever sold is a digital artwork called ‘Everydays: The First 5000 Days’”. (2) Fact-checking high-precision statistics: FACTOOL powered by GPT-4 successfully identifies false claims such as “Ireland has an obesity rate of 26.9%” and “‘Everydays: The First 5000 Days’ sold for 69 million”. Detailed examples can be found in Fig. 10 of Appendix B.
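The chunk above names the three stages the KB-based QA check depends on: query generation, snippet retrieval, and an evidence-conditioned verdict. The snippet below is a minimal illustrative sketch of that flow, not FacTool's actual code; `llm` and `search` are placeholder callables standing in for a language-model call and a search-engine call.

```python
from typing import Callable, List

# Minimal illustrative sketch of claim verification with retrieved evidence.
# Not FacTool's implementation; `llm` and `search` are hypothetical placeholders.

def verify_claim(claim: str,
                 llm: Callable[[str], str],
                 search: Callable[[str], List[str]],
                 num_queries: int = 2) -> str:
    # 1. Ask the LLM to produce search queries for the claim.
    queries = llm(f"Write {num_queries} search queries to fact-check: {claim}").splitlines()
    # 2. Retrieve the most relevant snippets for each query.
    snippets = [s for q in queries for s in search(q)]
    # 3. Ask the LLM to judge the claim given the retrieved evidence.
    evidence = "\n".join(snippets)
    return llm(f"Claim: {claim}\nEvidence:\n{evidence}\nAnswer True or False.")

# Toy usage with stubbed components (a real system would call a model and a search API).
if __name__ == "__main__":
    fake_llm = lambda prompt: "query 1\nquery 2" if "search queries" in prompt else "False"
    fake_search = lambda query: ["Argentina won the FIFA World Cup in 2022."]
    print(verify_claim("Argentina has not won the World Cup since 1986", fake_llm, fake_search))
```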
2307.13528#44
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
44
Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online, July 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/2020.acl-main.92. Shakked Noy and Whitney Zhang. Experimental evidence on the productivity effects of generative artificial intelligence. Available at SSRN 4375283, 2023. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models, 2021. OpenAI. GPT-4 technical report, 2023. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math word problems?, 2021.
2307.13692#44
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
44
Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In Kamalika Chaudhuri, Stefanie Jegelka, Le Song, Csaba Szepesvári, Gang Niu, and Sivan Sabato (eds.), International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pp. 9118–9147. PMLR, 2022. URL https://proceedings.mlr.press/v162/huang22a.html. Yacine Jernite, Kavya Srinet, Jonathan Gray, and Arthur Szlam. CraftAssist Instruction Parsing: Semantic Parsing for a Minecraft Assistant. ArXiv preprint, abs/1905.01978, 2019. URL https://arxiv.org/abs/1905.01978. Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. ArXiv preprint, abs/2303.17491, 2023. URL https://arxiv.org/abs/2303.17491.
2307.13854#44
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
45
Code Generation The fact-checking capability of FACTOOL on code generation is determined by the LLM’s capability to generate high-quality test cases and potential solutions. We demonstrate that due to GPT-4’s exceptional ability to generate such high-quality test cases and potential solutions, FACTOOL powered by GPT-4 outperforms the other baselines. For example, in “HumanEval/36”, GPT-4 consistently generates high-quality solutions, leading it to correctly identify the mistakes in the response, while ChatGPT fails to identify the mistake. Detailed examples can be found in Fig. 11 and Fig. 12 of Appendix B. Math Problems The fact-checking capability of FACTOOL on math problems is determined by the LLM’s capability to generate accurate Python snippets that verify the correctness of the extracted mathematical calculations. Both FACTOOL powered by GPT-4 and FACTOOL powered by ChatGPT excel in this regard. For example, both correctly identify that 23 × 4319216 does not equal 99305768. Detailed examples can be found in Fig. 13 of Appendix B.
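The math-checking step described above amounts to executing a small Python check for each extracted calculation. A minimal sketch of such a check is below; it illustrates the idea rather than reproducing FacTool's generated code.

```python
# Minimal sketch of the kind of Python check used to verify an extracted calculation.
# Illustrative only; FacTool generates snippets like this with an LLM.
claimed_lhs = 23 * 4319216   # the calculation stated in the response
claimed_rhs = 99305768       # the result the response claims

print(claimed_lhs)                 # 99341968
print(claimed_lhs == claimed_rhs)  # False -> the claim "23 x 4319216 = 99305768" is flagged
```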
2307.13528#45
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
45
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math word problems?, 2021. Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022. Mahima Pushkarna, Andrew Zaldivar, and Oddur Kjartansson. Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI. arXiv e-prints, art. arXiv:2204.01075, April 2022. doi: 10.48550/arXiv.2204.01075. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019. Subhro Roy and Dan Roth. Solving general arithmetic word problems, 2016.
2307.13692#45
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
45
S. Marsella, J. Gratch, and P. Petta, "Computational models of emotion," A Blueprint for Affective Computing-A sourcebook and manual, vol. 11, no. 1, pp. 21-46, 2010. T. Ullman, "Large language models fail on trivial alterations to theory-of-mind tasks," arXiv preprint arXiv:2302.08399, 2023. M. Binz and E. Schulz, "Using cognitive psychology to understand GPT-3," Proceedings of the National Academy of Sciences, vol. 120, no. 6, p. e2218523120, 2023. C. A. Smith and R. S. Lazarus, "Emotion and adaptation," Handbook of personality: Theory and research, vol. 21, pp. 609-637, 1990. A. N. Tak and J. Gratch, "Is GPT a Computational Model of Emotion?," presented at the 2023 11th International Conference on Affective Computing and Intelligent Interaction (ACII), Cambridge, MA, USA, 2023. T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word
2307.13779#45
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
45
Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Daniel Gordon, Yuke Zhu, Abhinav Gupta, and Ali Farhadi. AI2-THOR: An Interactive 3D Environment for Visual AI. arXiv, 2017. Heinrich Küttler, Nantas Nardelli, Alexander H. Miller, Roberta Raileanu, Marco Selvatici, Edward Grefenstette, and Tim Rocktäschel. The NetHack Learning Environment. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin (eds.), Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL https://proceedings.neurips.cc/paper/2020/hash/569ff987c643b4bedf504efda8f786c2-Abstract.html.
2307.13854#45
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
46
Scientific Literature Review The fact-checking capability of FACTOOL on scientific literature review is determined by the LLM’s capability to identify whether the generated author list is a subset of the actual author list. Both FACTOOL powered by GPT-4 and FACTOOL powered by ChatGPT excel in this regard. For example, both correctly identify that the paper “The Impact of Artificial Intelligence on Employment” was not written by “Acemoglu and Restrepo”. Detailed examples can be found in Fig. 14 of Appendix B. 6.2.3 Failure Analysis To gain a comprehensive understanding of FACTOOL’s performance, we analyze cases where FACTOOL fails.
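The author check described above is essentially a set-containment test between the authors named in the generated citation and the authors retrieved for the cited paper. A minimal sketch follows; it is illustrative only, and the retrieved author list is hard-coded rather than fetched from a scholarly API.

```python
# Minimal sketch of the author-list check: the generated authors must be a subset
# of the actual authors of the cited paper. Illustrative only; a real pipeline
# would retrieve `actual_authors` from a scholarly search service.

def normalize(name: str) -> str:
    return name.lower().strip()

def authors_supported(generated_authors, actual_authors) -> bool:
    generated = {normalize(a) for a in generated_authors}
    actual = {normalize(a) for a in actual_authors}
    return generated.issubset(actual)

# Toy usage: the generated citation attributes the paper to the wrong authors.
generated = ["Acemoglu", "Restrepo"]
actual = ["Some Other Author"]   # placeholder for the retrieved author list
print(authors_supported(generated, actual))  # False -> the citation is flagged
```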
2307.13528#46
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
46
Subhro Roy and Dan Roth. Solving general arithmetic word problems, 2016. Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, and Lakshmivihari Mareedu. An independent evaluation of ChatGPT on mathematical word problems (MWP). arXiv preprint arXiv:2302.13814, 2023. Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. Clever Hans or Neural Theory of Mind? Stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763, 2023. Paulo N de Souza and Jorge N. Silva. Berkeley problems in Mathematics. Springer New York, 2008.
2307.13692#46
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
46
Cambridge, MA, USA, 2023. T. Mikolov, K. Chen, G. Corrado, and J. Dean, "Efficient estimation of word representations in vector space," arXiv preprint arXiv:1301.3781, 2013. J. Gratch, L. Cheng, and S. Marsella, "The appraisal equivalence hypothesis: Verifying the domain-independence of a computational model of emotion dynamics," in 2015 International Conference on Affective Computing and Intelligent Interaction (ACII), 21-24 Sept. 2015, pp. 105-111, doi: 10.1109/ACII.2015.7344558. J. Gratch, S. Marsella, N. Wang, and B. Stankovic, "Assessing the validity of appraisal-based models of emotion," in 2009 3rd International Conference on Affective Computing and Intelligent Interaction and Workshops, 2009: IEEE, pp. 1-8.
2307.13779#46
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
46
Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:452–466, 2019. doi: 10.1162/tacl_a_00276. URL https://aclanthology.org/Q19-1026. Yann LeCun. A path towards autonomous machine intelligence version 0.9.2, 2022-06-27. Open Review, 62, 2022. Kenton Lee, Mandar Joshi, Iulia Raluca Turc, Hexiang Hu, Fangyu Liu, Julian Martin Eisenschlos, Urvashi Khandelwal, Peter Shaw, Ming-Wei Chang, and Kristina Toutanova. Pix2struct: Screenshot parsing as pretraining for visual language understanding. In International Conference on Machine Learning, pp. 18893–18912. PMLR, 2023.
2307.13854#46
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
47
6.2.3 Failure Analysis To gain a comprehensive understanding of FACTOOL’s performance, we analyze cases where FACTOOL fails. KB-based QA We summarize the following sources of errors: (1) Reasoning error: Although the evidence provided is sufficient and the LLM accurately finds the most relevant information, the model fails to reason about the relationship between the claim and the provided evidence. For example, for the claim “Jupiter is less dense than Saturn”, FACTOOL powered by GPT-4 fails to reason about the relative relationship even though the evidence provided is sufficient. (2) Conflicting evidence: Conflicts in the evidence can confuse the LLM, leading to incorrect decisions. For example, for the claim “Jupiter has a density of 1.33 grams per cubic centimeter”, there is conflicting evidence claiming that the density is 1.326 or 1.33 g/cm3. (3) Ambiguity in claim: Ambiguous descriptions and subjective adjectives can lead to incorrect decisions. For example, the claim “Fortune cookies are enjoyed by people all over the world.” is ambiguous and can have different answers based on different interpretations. Detailed examples can be found in Fig. 15 of Appendix B.
2307.13528#47
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
47
Paulo N de Souza and Jorge N. Silva. Berkeley Problems in Mathematics. Springer New York, 2008. Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022. doi: 10.48550/ARXIV.2206.04615. URL https://arxiv.org/abs/2206.04615. Jacob Steinhardt. AI forecasting: One year in, 2022. URL https://bounded-regret.ghost.io/ai-forecasting-one-year-in/. Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. CommonsenseQA: A question answering challenge targeting commonsense knowledge. CoRR, abs/1811.00937, 2018. URL http://arxiv.org/abs/1811.00937.
2307.13692#47
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
47
Xinze Li, Yixin Cao, Muhao Chen, and Aixin Sun. Take a break in the middle: Investigating subgoals towards hierarchical script generation. ArXiv preprint, abs/2305.10907, 2023. URL https://arxiv.org/abs/2305.10907. Yang Li, Jiacong He, Xin Zhou, Yuan Zhang, and Jason Baldridge. Mapping natural language instructions to mobile UI action sequences. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 8198–8210, Online, 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.729. URL https://aclanthology.org/2020.acl-main.729. Jacky Liang, Wenlong Huang, Fei Xia, Peng Xu, Karol Hausman, Brian Ichter, Pete Florence, and Andy Zeng. Code as policies: Language model programs for embodied control. In 2023 IEEE International Conference on Robotics and Automation (ICRA), pp. 9493–9500. IEEE, 2023.
2307.13854#47
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
48
Code Generation Errors in code generation mainly come from: (1) Limited variety in synthetic test cases: The synthetic test cases generated by LLMs may not be fully representative or sufficiently diverse. For example, in the “HumanEval/64” sample, all the inputs of the generated synthetic test cases are composed of strings that only include lowercase letters (without uppercase letters). (2) Potential errors in code generation: The generated potential solutions could contain errors or bugs. Despite implementing a majority voting system to lessen this issue, it cannot completely eliminate the chance of bugs in the code generation process. For example, in the “HumanEval/79” sample, all the generated solutions failed to correctly return “db0db” for “decimal_to_binary(0)”. Detailed examples can be found in Fig. 16 of Appendix B.
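The majority-voting mitigation mentioned above can be illustrated with a short sketch: several candidate solutions are run on a synthetic test input, and the most common output is taken as the pseudo-ground truth against which the response under review would be compared. This is an illustration of the idea under stated assumptions, not FacTool's implementation; the HumanEval/79-style `decimal_to_binary` candidates below are written here for the example.

```python
from collections import Counter

# Illustrative sketch of majority voting over generated candidate solutions.
# Not FacTool's actual code; the candidates stand in for LLM-generated solutions.

def candidate_a(n: int) -> str:          # correct solution
    return "db" + bin(n)[2:] + "db"

def candidate_b(n: int) -> str:          # buggy: drops the edge case n == 0
    return "db" + (bin(n)[2:] if n > 0 else "") + "db"

def candidate_c(n: int) -> str:          # correct solution
    return "db" + format(n, "b") + "db"

def majority_output(candidates, test_input):
    outputs = [c(test_input) for c in candidates]
    return Counter(outputs).most_common(1)[0][0]

# If most candidates are correct, the vote recovers "db0db" for input 0;
# if most candidates share the same bug, the vote inherits that bug.
print(majority_output([candidate_a, candidate_b, candidate_c], 0))  # "db0db"
```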
2307.13528#48
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
48
Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. Large language models still can’t plan (a benchmark for LLMs on planning and reasoning about change), 2023. Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems, 2019a. Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R. Bowman. GLUE: A multi-task benchmark and analysis platform for natural language understanding. In the Proceedings of ICLR, 2019b. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models, 2023.
2307.13692#48
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
48
Evan Zheran Liu, Kelvin Guu, Panupong Pasupat, Tianlin Shi, and Percy Liang. Reinforcement learning on web interfaces using workflow-guided exploration. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net, 2018. URL https://openreview.net/forum?id=ryTp3f-0-. Jieyi Long. Large language model guided tree-of-thought. ArXiv preprint, abs/2305.08291, 2023. URL https://arxiv.org/abs/2305.08291. Aman Madaan, Shuyan Zhou, Uri Alon, Yiming Yang, and Graham Neubig. Language models of code are few-shot commonsense learners. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pp. 1384–1403, Abu Dhabi, United Arab Emirates, 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.emnlp-main.90.
2307.13854#48
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
49
Math Problems There are two major types of errors in factuality detection for math problems: (1) Round-off error: Round-off errors can occur during numerical calculations in Python. For example, FACTOOL powered by GPT-4 incorrectly classifies the math calculation “60444034 / 12 = 5037002.83” as “False”. (2) Reasoning error: Since the claims extracted by FACTOOL only involve mathematical calculations, FACTOOL will not verify the reasoning process of the mathematical solution. For example, for the question “Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs only 60% of the price. Kylar wants to buy 5364765 glasses. How much does he need to pay for them?”, the ChatGPT-generated response contains a reasoning error that incorrectly computes the total cost as “5,364,765 * 5”. However, since FACTOOL only checks math calculation errors, FACTOOL powered by GPT-4 did not identify the reasoning error. Detailed examples can be found in Fig. 17 of Appendix B.
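The round-off failure above comes from comparing a truncated decimal against an exact quotient. The small sketch below shows how an exact equality check flags “60444034 / 12 = 5037002.83” as false while a tolerance-based check accepts it; the tolerance value is an assumption for illustration, not FacTool's setting.

```python
import math

# 60444034 / 12 is 5037002.8333..., so an exact comparison against the rounded
# value 5037002.83 fails even though the claim is correct to two decimal places.
lhs = 60444034 / 12
claimed = 5037002.83

print(lhs == claimed)                             # False: exact check flags a round-off "error"
print(math.isclose(lhs, claimed, abs_tol=0.005))  # True: tolerant check accepts the rounded value
```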
2307.13528#49
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
49
Jason Wei, Yi Tay, Rishi Bommasani, Colin Raffel, Barret Zoph, Sebastian Borgeaud, Dani Yogatama, Maarten Bosma, Denny Zhou, Donald Metzler, Ed H. Chi, Tatsunori Hashimoto, Oriol Vinyals, Percy Liang, Jeff Dean, and William Fedus. Emergent abilities of large language models, 2022a. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models, 2022b. URL https://arxiv.org/abs/2201.11903. Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William W. Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering, 2018. Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L. Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models, 2023.
2307.13692#49
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
49
Dipendra K Misra, Jaeyong Sung, Kevin Lee, and Ashutosh Saxena. Tell me dave: Context-sensitive grounding of natural language to manipulation instructions. The International Journal of Robotics Research, 35(1-3):281–300, 2016. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332, 2021. OpenAI. Chatgpt: Optimizing language models for dialogue. 2022. OpenAI. Gpt-4 technical report. arXiv, pp. 2303–08774, 2023. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744, 2022.
2307.13854#49
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
50
Scientific Literature Review There are two major types of errors in factuality detection for scientific literature review: (1) Errors in title matching: Title matching can sometimes be problematic due to abbreviations in the generated citations or the retrieved title. For example, although the paper “MDMA-assisted psychotherapy for treatment of PTSD: study design and rationale for phase 3 trials based on pooled analysis of six phase 2 randomized controlled trials” exists, FACTOOL powered by GPT-4 identifies the paper title as incorrect. (2) Errors in author matching: the author matching process might sometimes not be robust. For example, although the authors of “Language Models are the Unsupervised Multitask Learners” are indeed “Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever”, FACTOOL powered by GPT-4 identifies the author list as incorrect. Detailed examples can be found in Fig. 18 of Appendix B. # 6.3 Exp-III: Using FACTOOL to Evaluate the Factuality of Modern Chatbots
2307.13528#50
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
50
Ping Yu, Tianlu Wang, Olga Golovneva, Badr Alkhamissy, Gargi Ghosh, Mona Diab, and Asli Celikyilmaz. ALERT: Adapting language models to reasoning tasks, 2022. Sarah Zhang, Reece Shuttleworth, Derek Austin, Yann Hicke, Leonard Tang, Sathwik Karnik, Darnell Granberry, and Iddo Drori. A dataset and benchmark for automatically answering and generating machine learning final exams. arXiv preprint arXiv:2206.05442, 2022. Hefei Zhongguo-Kexue-Jishu-Daxue. Major American universities Ph. D. qualifying questions and solutions. 5. Problems and solutions on thermodynamics and statistical mechanics. World Scientific, 1990. # A Datasheet We present the data card, following the format proposed by Pushkarna et al. [2022]. # Dataset Owners. [email protected]. Table 6: Data overview.
2307.13692#50
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
50
Xavier Puig, Kevin Ra, Marko Boben, Jiaman Li, Tingwu Wang, Sanja Fidler, and Antonio Torralba. Virtualhome: Simulating household activities via programs. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18-22, 2018, pp. 8494–8502. IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00886. URL http://openaccess.thecvf.com/content_cvpr_2018/html/Puig_VirtualHome_Simulating_Household_CVPR_2018_paper.html. Pranav Rajpurkar, Jian Zhang, Konstantin Lopyrev, and Percy Liang. SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 2383–2392, Austin, Texas, 2016. Association for Computational Linguistics. doi: 10.18653/v1/D16-1264. URL https://aclanthology.org/D16-1264.
2307.13854#50
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
51
The purpose of developing a factuality detector is to audit actual generative chatbots and assess the reliability of the responses they generate. To this end, we evaluate the factuality of modern chatbots, including GPT-4, ChatGPT, Claude-v1, Bard, and Vicuna-13B, using FACTOOL powered by GPT-4. It is important to note that in Exp-III, we consider FACTOOL as a golden evaluator, responsible for evaluating the factual accuracy of the responses generated by different chatbots. For prompt selection, we follow the same intuition as (Zhou et al., 2023): KB-QA is the most common scenario. Thus, we select 30 KB-QA prompts, 10 code prompts, 10 math prompts, and 10 scientific prompts (i.e., 3 times more KB-QA prompts compared to prompts from other scenarios) to carry out this factuality evaluation on chatbots. The KB-QA prompts are collected from (Zhou et al., 2023), code prompts from HumanEval (Chen et al., 2021), and math prompts from (Gao et al., 2022b), while the scientific prompts are generated by us. Responses to these prompts are generated by each of the evaluated chatbots.
2307.13528#51
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
51
# Dataset Owners. [email protected]. Table 6: Data overview. Subject | Task Type | Source: Mathematics | Contest problems | Gelca and Andreescu [2017], Brayman and Kukush [2018]; Mathematics | University math proof | Souza and Silva [2008], Harvard University [2021]; Physics | PhD qualifying exam | Zhongguo-Kexue-Jishu-Daxue [1990]; Law | US Law Standardized Exam | Barbri [2007]; MCAT | Reading comprehension | Campbell et al. [2017]; MCAT | College science | Campbell et al. [2017]. # Dataset Overview. See Table 6. Risk and Mitigation. There is little risk associated with this dataset, as it is intended for benchmarking reasoning capabilities of models, and it is too small to be used for advancing capabilities. Maintenance Status. Limited Maintenance. The data will not get major updates, but any technical issues will be addressed. Maintenance Plan. Any technical issues will be addressed. • Versioning. No new versions are planned. • Updates. Updates are limited to bug and error fixes. • Errors. Error handling will be considered case by case. Feedback. [email protected]. Example: Typical Data Point. Each data point of the dataset consists of a problem statement paired with a ground truth solution. Table 7, Table 9, and Table 10 show the problem statement and ground truth solution of typical data points.
2307.13692#51
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
51
Pranav Rajpurkar, Robin Jia, and Percy Liang. Know what you don’t know: Unanswerable questions for SQuAD. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 784–789, Melbourne, Australia, 2018. Association for Computational Linguistics. doi: 10.18653/v1/P18-2124. URL https://aclanthology.org/P18-2124. Peter Shaw, Mandar Joshi, James Cohan, Jonathan Berant, Panupong Pasupat, Hexiang Hu, Urvashi Khandelwal, Kenton Lee, and Kristina Toutanova. From pixels to ui actions: Learning to follow instructions via graphical user interfaces. arXiv preprint arXiv:2306.00245, 2023.
2307.13854#51
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
52
We report both the claim-level and response-level accuracies for each chatbot, as evaluated by FACTOOL powered by GPT-4. Given that KB-QA responses contain significantly more claims than responses from other scenarios, we report the weighted claim-level accuracy. This weight is determined by the ratio of the number of prompts in each scenario. In other words, weighted_claim_level_accuracy = (3/6) × claim_level_accuracy in KB-QA + (1/6) × claim_level_accuracy in Code + (1/6) × claim_level_accuracy in Math + (1/6) × claim_level_accuracy in Scientific. Adopting the weighted claim-level accuracy evaluation helps us provide a more holistic and fair assessment of each chatbot’s factual accuracy.
2307.13528#52
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
52
Sensitive Human Attributes. We have not found any sensitive human attributes in our dataset. Data Distributions. The table in Section 3.1 shows the number of problems for each subject area and answer type. Text entries (problem statement, ground truth solution, ground truth answer) for all categories are in LaTeX (although obviously, the non-quantitative subjects have very few mathematical expressions). # B Dataset format The benchmark dataset is available in .jsonl format, containing problem statements, ground truth solutions, and final ground truth answers for each entry. We additionally include metadata such as subject names and problem topics, where available. We chose the four subject areas discussed earlier for several reasons. Primarily, the dataset focuses on math and physics, as these subjects present more challenging problems than existing benchmarks. However, to ensure a comprehensive evaluation of models, we also included subjects like Law and MCAT. This inclusion allows for assessing model performance across a wider range of technical domains, beyond the quantitative sciences. Although previous works have evaluated recent models on law [Katz et al., 2023], we draw upon the established importance of broad benchmarks like SuperGLUE [Wang et al., 2019a]. Making a benchmark more comprehensive expands the evaluation scope while enhancing the dataset’s significance in the wider AI research context. # C Accessing the dataset We facilitate access to the dataset through API calls*. The problems can be accessed by the different splits and subject areas.
2307.13692#52
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
52
Tianlin Shi, Andrej Karpathy, Linxi Fan, Jonathan Hernandez, and Percy Liang. World of bits: An open-domain platform for web-based agents. In Doina Precup and Yee Whye Teh (eds.), Proceedings of the 34th International Conference on Machine Learning, ICML 2017, Sydney, NSW, Australia, 6-11 August 2017, volume 70 of Proceedings of Machine Learning Research, pp. 3135–3144. PMLR, 2017. URL http://proceedings.mlr.press/v70/shi17a.html. Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. ArXiv preprint, abs/2303.11366, 2023. URL https://arxiv.org/abs/2303.11366.
2307.13854#52
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
53
Adopting the weighted claim-level accuracy evaluation helps us provide a more holistic and fair assessment of each chatbot’s factual accuracy. Results Tab. 6 shows that GPT-4 has the best weighted claim-level factual accuracy and response-level accuracy compared to ChatGPT, Bard, Claude-v1, and Vicuna. Fig. 4 and 5 demonstrate fine-grained performance w.r.t. each scenario (KB-QA, code, math, scientific). We observe that (a) GPT-4 has the best claim-level accuracy and response-level accuracy in most of the scenarios. (b) Supervised fine-tuned chatbots like Vicuna-13B perform reasonably well in more common scenarios like KB-QA but less so in more challenging scenarios such as math, code, and scientific. Table 6 data, per LLM as WCL Acc. / RL Acc. / Avg. Resp. Len.: GPT-4 75.60 / 43.33 / 196.83; ChatGPT 68.63 / 36.67 / 144.05; Claude-v1 63.95 / 26.67 / 208.70; Bard 61.15 / 33.33 / 263.77; Vicuna-13B 50.35 / 21.67 / 207.13.
2307.13528#53
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
53
# C Accessing the dataset We facilitate access to the dataset through API calls*. The problems can be accessed by the different splits and subject areas. We use an API to host the data (rather than making it publicly available on popular platforms such as HuggingFace or Github) out of concern for data contamination. Many models released in the last few years have been trained on benchmark datasets OpenAI [2023], an approach that could inadvertently inflate the model’s performance on the associated benchmark. There are several proposed strategies to mitigate this problem, including dataset poisoning and canary text methods. However, restricting the dataset's accessibility to web crawlers seemed to be the only way to ensure the integrity of our dataset for future uses. # D Model-based Rubric Eval Table 7 presents a GPT-4 generated rubric and self-evaluation for symbolic math questions. GPT-4 does a good job breaking the problem into important sub-parts of the reference solution. Further, it ensures a well-balanced point distribution among sub-problems, assigning similar point values to each sub-part. In contrast, less powerful models like GPT-3.5-turbo struggle to generate good rubrics, as they tend to under-specify sub-problems and allocate too many points to irrelevant steps.
2307.13692#53
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
53
Mohit Shridhar, Jesse Thomason, Daniel Gordon, Yonatan Bisk, Winson Han, Roozbeh Mottaghi, Luke Zettlemoyer, and Dieter Fox. ALFRED: A benchmark for interpreting grounded instructions for everyday tasks. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, pp. 10737–10746. IEEE, 2020. doi: 10.1109/CVPR42600.2020.01075. URL https://doi.org/10.1109/CVPR42600.2020.01075. Mohit Shridhar, Xingdi Yuan, Marc-Alexandre Côté, Yonatan Bisk, Adam Trischler, and Matthew J. Hausknecht. Alfworld: Aligning text and embodied environments for interactive learning. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021. URL https://openreview.net/forum?id=0IOX0YcCdTn.
2307.13854#53
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
54
Table 6: Factual accuracy of different chatbots evaluated by FACTOOL. WCL Acc. stands for weighted claim-level accuracy of each chatbot. RL Acc. stands for response-level accuracy of each chatbot. Avg. Resp. Len. stands for average response length of each chatbot. Note that we consider FACTOOL as the golden evaluator that evaluates the factuality of the responses generated by each chatbot. # 7 Conclusion We introduce FACTOOL, a task- and domain-agnostic framework designed to tackle the escalating challenge of factual error detection in generative AI. We expand the conventional definition of factuality, particularly focusing on auditing the capabilities of generative AI models. Realizing that (1) the generated texts of LLMs tend to be lengthy and lack a clearly defined granularity for individual facts, and that (2) there is a scarcity of explicit evidence available during the process of fact checking, we build FACTOOL as a 5-step tool-augmented framework that consists of claim extraction, query generation, tool querying, evidence collection, and verification.
2307.13528#54
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
54
The main downside of evaluating solutions only on the final answer is that the metric is not smooth; a small mistake in the middle of the solution always leads to an incorrect final answer. Grading using a rubric shows that GPT-4 generates some useful ideas, for example the correct formula for parameterizing an ellipse. The model also correctly identifies that the question’s area is optimized by an isosceles triangle. Despite this, it is unable to correctly compute the final answer due to an earlier mistake in the response. This indicates that GPT-4 has some problem-solving abilities, but struggles to detect or recover from earlier errors in generation. *The API can be accessed through the following: https://arb.duckai.org/api/lib. The documentation can be found here: https://app.swaggerhub.com/apis-docs/arb-dataset/arb-api/1.0.5 # D.1 Using ChatGPT for Symbolic Evaluation
2307.13692#54
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
54
Abishek Sridhar, Robert Lo, Frank F Xu, Hao Zhu, and Shuyan Zhou. Hierarchical prompting assists large language model on web navigation. arXiv preprint arXiv:2305.14257, 2023. Daniel Toyama, Philippe Hamel, Anita Gergely, Gheorghe Comanici, Amelia Glaese, Zafarali Ahmed, Tyler Jackson, Shibl Mourad, and Doina Precup. Androidenv: A reinforcement learning platform for android. ArXiv preprint, abs/2105.13231, 2021. URL https://arxiv.org/abs/2105.13231. Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. Voyager: An open-ended embodied agent with large language models. ArXiv preprint, abs/2305.16291, 2023. URL https://arxiv.org/abs/2305.16291. Zhiruo Wang, Shuyan Zhou, Daniel Fried, and Graham Neubig. Execution-based evaluation for open-domain code generation. ArXiv preprint, abs/2212.10481, 2022. URL https://arxiv.org/abs/2212.10481.
2307.13854#54
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
55
We demonstrate the potential of incorporating tools like Google Search, Google Scholar, code interpreters, Python, and even LLMs themselves in factual error detection through experimentation across diverse tasks such as knowledge-based QA, code generation, math problem solving, and scientific literature review writing. We believe that our holistic and adaptable framework can be easily extended to more scenarios. [Figure 4: Claim-Level Accuracy across scenarios (KB-QA, Code, Math, Scientific) for GPT-4, ChatGPT, Bard, Claude-v1, and Vicuna-13B; y-axis: accuracy, 0 to 1.] # Acknowledgements We thank Yixin Liu, Zhengbao Jiang, Zhiruo Wang for the useful discussion and suggestions. # References Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with gpt-4. arXiv preprint arXiv:2303.12712.
2307.13528#55
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
55
# D.1 Using ChatGPT for Symbolic Evaluation Unlike GPT-4, GPT-3.5-turbo is not able to write rubrics with good coverage of the reference solution and a fair point breakdown. Often the model will over-simplify the rubric and allocate far too many points to non-essential parts of the problem. However, GPT-3.5-turbo does possess some ability to reason about complex symbolic expressions. Motivated by this, we asked the model to grade the final answers to symbolic math and physics problems. While much easier to grade than intermediate reasoning steps, more involved symbolic expressions still require human evaluation to compare accurately. Using cheap models like GPT-3.5-turbo to automate this symbolic equivalence check boosts our ability to evaluate models on this more complicated class of reasoning problems. We prompt GPT-3.5-turbo to compare extracted model-generated symbolic final answers from GPT-4 to the reference answer and record results in Table 8.
2307.13692#55
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
55
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022. Nancy Xu, Sam Masling, Michael Du, Giovanni Campagna, Larry Heck, James Landay, and Monica Lam. Grounding open-domain instructions to automate web support tasks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 1022–1032, Online, 2021. Association for Computational Linguistics. doi: 10.18653/v1/2021.naacl-main.80. URL https://aclanthology.org/2021.naacl-main.80.
2307.13854#55
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
56
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, and Weizhu Chen. 2022. Codet: Code generation with generated tests. arXiv preprint arXiv:2207.10397. Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870–1879, Vancouver, Canada. Association for Computational Linguistics.
2307.13528#56
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
56
GPT-3.5-turbo is surprisingly good at not equating non-equivalent symbolic statements, achieving a false positive rate of 0. However, this comes at the cost of the model often not assigning credit when it should, leading to an underestimate of the true performance of the graded answers on symbolic tasks. In particular, GPT-3.5-turbo often fails when comparing expressions with more than 3 variables. Furthermore, we observe in several cases that the student model GPT-4 is able to correctly generate the final answer, but does not present it in the expected format, making it impossible for GPT-3.5-turbo to correctly compare against the ground truth reference. These errors could potentially be avoided by giving the model access to the entire student-generated reasoning trace, but we expect this to be of minimal help and perhaps even harm performance. # E Interesting Examples Math Example. One common way GPT-4 gets numerical questions wrong is when it does some formal manipulations followed by a black-box numerical calculation, after which the model spits out an (incorrect) answer (Table 10). The formal manipulation does not simplify the problem, and so all the work of solving the problem is done in the final step of the calculation.
2307.13692#56
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
56
Zhilin Yang, Peng Qi, Saizheng Zhang, Yoshua Bengio, William Cohen, Ruslan Salakhutdinov, and Christopher D. Manning. HotpotQA: A dataset for diverse, explainable multi-hop question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 2369–2380, Brussels, Belgium, 2018. Association for Computational Linguistics. doi: 10.18653/v1/D18-1259. URL https://aclanthology.org/D18-1259. Shunyu Yao, Howard Chen, John Yang, and Karthik Narasimhan. Webshop: Towards scalable real-world web interaction with grounded language agents. ArXiv preprint, abs/2207.01206, 2022a. URL https://arxiv.org/abs/2207.01206. Shunyu Yao, Jeffrey Zhao, Dian Yu, Nan Du, Izhak Shafran, Karthik Narasimhan, and Yuan Cao. React: Synergizing reasoning and acting in language models. ArXiv preprint, abs/2210.03629, 2022b. URL https://arxiv.org/abs/2210.03629.
2307.13854#56
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13692
57
MCAT Example. GPT-4 gets confused when the meanings of words are implicit in prerequisite knowledge or context. In one example about DNA replication (Table 9), the model correctly identifies that the radioactive thymine is present in the two strands of nucleotides from the original DNA, but fails to deduce that both of the resulting double helices are radioactive. This seems to be because the model confuses the word "DNA" with "strands" of the DNA. When looking at choice C, the model (incorrectly) assumes that each of the four strands in the new double helices is radioactive, when it is clear from context that the choice is referring to the radioactive molecule being present somewhere in each double helix (not necessarily in each strand). Because of this misconception, the model chooses D. Law Example. An unexpected mistake from GPT-4 in answering law questions is when the model reads too much into an answer choice. For example, GPT-4 incorrectly produced this answer and reasoning: “B: This answer choice suggests that when two crossing offers are identical, one will be treated as an offer and the other as an acceptance. This accurately reflects the situation between Mom and Huck, as they both agreed on the same price.” And made this final answer choice: “Yes, because when two crossing offers are identical in import, one will be treated as an offer and the other as an acceptance.”
2307.13692#57
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
57
Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. Tree of thoughts: Deliberate problem solving with large language models. ArXiv preprint, abs/2305.10601, 2023. URL https://arxiv.org/abs/2305.10601. Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. ArXiv preprint, abs/1709.00103, 2017. URL https://arxiv.org/abs/1709.00103. Shuyan Zhou, Pengcheng Yin, and Graham Neubig. Hierarchical control of situated agents through natural language. In Proceedings of the Workshop on Structured and Unstructured Knowledge Integration (SUKI), pp. 67–84, Seattle, USA, 2022a. Association for Computational Linguistics. doi: 10.18653/v1/2022.suki-1.8. URL https://aclanthology.org/2022.suki-1.8.
2307.13854#57
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
58
Xinyun Chen, Maxwell Lin, Nathanael Schärli, and Denny Zhou. 2023. Teaching large language models to self-debug. arXiv preprint arXiv:2304.05128. [Figure 5: Response-Level Accuracy across scenarios for GPT-4, ChatGPT, Bard, Claude-v1, and Vicuna-13B.] Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Mark Chen, Heewoo Jun, Lukasz Kaiser, Matthias Plappert, Jerry Tworek, Jacob Hilton, Reiichiro Nakano, et al. 2021. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168.
2307.13528#58
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
58
And made this final answer choice: “Yes, because when two crossing offers are identical in import, one will be treated as an offer and the other as an acceptance.” The error GPT-4 made is treating the statement in the answer choice (“when two crossing offers are identical, one will be treated as an offer and the other as an acceptance”) as a fact, and then making a choice based on that fact being true. Better prompting could likely prevent errors of this type. # F Memorization While it is difficult to quantify potential problem leakage and memorization of similar problems, some outputs suggest this might be a relevant factor. In Table 12, GPT-3.5-turbo appears to be memorizing the answer to this symbolic physics problem, given that it gets the formula for the capacitance C correct despite having the wrong reasoning. In contrast, GPT-4 stays faithful to its reasoning chain, resulting in a slightly inaccurate final answer. Note that the overall performance of all models we test is still somewhat low, hence not majorly affected by memorization of similar problems. # G Prompts for Sections 4 and 5
2307.13692#58
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
58
Shuyan Zhou, Li Zhang, Yue Yang, Qing Lyu, Pengcheng Yin, Chris Callison-Burch, and Graham Neubig. Show me more details: Discovering hierarchies of procedures from semi-structured web data. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 2998–3012, Dublin, Ireland, 2022b. Association for Computational Linguistics. doi: 10.18653/v1/2022.acl-long.214. URL https://aclanthology.org/2022.acl-long.214. # A APPENDIX A.1 WEBSITE IMPLEMENTATION
2307.13854#58
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
59
Alexander Fabbri, Chien-Sheng Wu, Wenhao Liu, and Caiming Xiong. 2022. QAFactEval: Improved QA-based factual consistency evaluation for summarization. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2587–2601, Seattle, United States. Association for Computational Linguistics. Luyu Gao, Zhuyun Dai, Panupong Pasupat, Anthony Chen, Arun Tejasvi Chaganty, Yicheng Fan, Vincent Y. Zhao, Ni Lao, Hongrae Lee, Da-Cheng Juan, and Kelvin Guu. 2022a. RARR: Researching and revising what language models say, using language models. Luyu Gao, Aman Madaan, Shuyan Zhou, Uri Alon, Pengfei Liu, Yiming Yang, Jamie Callan, and Graham Neubig. 2022b. PAL: Program-aided language models. arXiv preprint arXiv:2211.10435.
2307.13528#59
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
59
# G Prompts for Sections 4 and 5 For OpenAI chat models (gpt-4 and gpt-3.5-turbo), following best practices, we prepend the system prompt. We do not use the system prompt for the other models because there is no well-established way to do so, and out of concern that it might hurt performance. We note that omitting the system prompt should have little effect on the performance of the other models, as the directions of the system prompt are essentially restated in our user prompts. # H Checklist Information. Dataset Intended Uses. The ARB benchmark dataset, documented within the paper, is primarily intended for research purposes. We do not advocate using this dataset to train models that help students cheat on exams. We hope that the research community will use this benchmark to better assess the reasoning capabilities of language models.
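As an illustration of the system-prompt handling described at the start of this chunk, here is a minimal sketch assuming the 2023-era openai Python SDK; the model names are real, but SYSTEM_PROMPT and USER_PROMPT are placeholders rather than the paper's actual prompts, and this is not the paper's evaluation harness.

```python
# Minimal sketch (not the paper's evaluation harness): prepending a system
# prompt only for OpenAI chat models, as described above. Assumes the
# pre-1.0 `openai` Python SDK; prompt strings are placeholders.
import openai

SYSTEM_PROMPT = "You are an expert problem solver. Answer carefully."
USER_PROMPT = "Problem: ...\nGive your final answer in the requested format."

def query_chat_model(model: str, use_system_prompt: bool = True) -> str:
    messages = []
    if use_system_prompt:
        # Chat models such as gpt-4 / gpt-3.5-turbo receive a system message first
        messages.append({"role": "system", "content": SYSTEM_PROMPT})
    messages.append({"role": "user", "content": USER_PROMPT})
    response = openai.ChatCompletion.create(model=model, messages=messages)
    return response["choices"][0]["message"]["content"]

# e.g. query_chat_model("gpt-4") vs. query_chat_model("gpt-3.5-turbo")
```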
2307.13692#59
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
59
Given the selected websites described in §2.2, we make our best attempt to reproduce the functionality of commonly used sites in a reproducible way. To achieve this, we utilized open-source frameworks for the development of the websites across various categories and imported data from their real-world counterparts. For the E-commerce category, we constructed a shopping website with approximately 90k products, including the prices, options, detailed product descriptions, images, and reviews, spanning over 300 product categories. This website is developed using Adobe Magento, an open-source e-commerce platform4. Data resources were obtained from actual online sites, such as the data included in the WebShop data dump (Yao et al., 2022a). As for the social forum platform, we deployed the open-source software Postmill5, the open-source counterpart of Reddit6. We sampled from the top 50 subreddits7. We then manually selected subreddits for northeast US cities as well as subreddits for machine learning and deep learning-related topics. This manual selection encourages cross-website tasks such as seeking information related to the northeast US on both Reddit and the map. In total, we have 95 subreddits, 127390 posts,
2307.13854#59
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
60
Rahul Jha, Reed Coke, and Dragomir Radev. 2015. Surveyor: A system for generating coherent survey articles for scientific topics. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 29. Ziwei Ji, Nayeon Lee, Rita Frieske, Tiezheng Yu, Dan Su, Yan Xu, Etsuko Ishii, Ye Jin Bang, Andrea Madotto, and Pascale Fung. 2023. Survey of hallucination in natural language generation. ACM Computing Surveys, 55(12):1–38. Ryo Kamoi, Tanya Goyal, Juan Diego Rodriguez, and Greg Durrett. 2023. WiCE: Real-world entailment for claims in Wikipedia. arXiv preprint arXiv:2303.01432. Mojtaba Komeili, Kurt Shuster, and Jason Weston. 2022. Internet-augmented dialogue generation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8460–8478, Dublin, Ireland. Association for Computational Linguistics.
2307.13528#60
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
60
Author Statement and License. We bear all responsibility in case of violation of rights. The ARB dataset is licensed under CC BY 4.0, and all helper code we release is under the MIT license. For all problems originating in books listed in Section 3, following [Hendrycks et al., 2021], we abide by Fair Use §107: "the fair use of a copyrighted work, including such use by ... scholarship, or research, is not an infringement of copyright", where fair use is determined by "the purpose and character of the use, including whether such use is of a commercial nature or is for nonprofit educational purposes" and "the effect of the use upon the potential market for or value of the copyrighted work". Table 7: Example model-generated rubric and evaluation. Evaluation with a rubric reveals that the model makes good partial progress but is unable to arrive at the correct final answer. Find the maximum area of all triangles that can be inscribed in an ellipse with semiaxes a and b, and describe the triangles that have maximum area. Using the parameterization x = a cos t, y = b sin t, a triple of points on the ellipse is given by (a cos t_i, b sin t_i), i = 1, 2, 3, so the area of an inscribed triangle is given by
2307.13692#60
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
60
encourages cross-website tasks such as seeking information related to the northeast US on both Reddit and the map. In total, we have 95 subreddits, 127390 posts, and 661781 users. For the collaborative software development platform, we choose GitLab8. We heuristically simulate code repository characteristics by sampling at least ten repositories for every programming language: 80% of them are sampled from the set of repositories in the top 90th percentile with respect to stars, using a discrete probability distribution weighted proportionally to their number of stars; the remaining are sampled from the bottom ten-percentile set using a similarly weighted distribution. This is done to ensure fair representation of repositories of all kinds, from popular projects with many issues and pull requests to small personal projects. In total, we have 300 repositories and more than 1000 accounts with at least one commit to a repository. For the content management system, we adapted Adobe Magento's admin portal, deploying the sample data provided in the official guide. We employ OpenStreetMap9 for the map service implementation, confining our focus to the northeast US region due to data storage constraints. We implement a calculator and a scratchpad ourselves.
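A minimal sketch of the star-weighted repository sampling described in this chunk; this is an illustrative reconstruction under stated assumptions (a list of (name, stars) tuples, sampling with replacement), not WebArena's actual data-collection code.

```python
# Illustrative sketch (not WebArena's actual data-collection code) of the
# star-weighted repository sampling described above: 80% of the picks come
# from the top-90th-percentile repos by stars, the rest from the bottom-10th
# percentile, each drawn with probability proportional to star count.
# The `repos` structure (list of (name, stars) tuples) is an assumption.
import random

def sample_repos(repos, n_total=10, top_fraction=0.8, seed=0):
    rng = random.Random(seed)
    repos = sorted(repos, key=lambda r: r[1])        # sort by star count
    cutoff_hi = int(0.9 * len(repos))                # 90th percentile index
    cutoff_lo = max(1, int(0.1 * len(repos)))        # 10th percentile index
    top_pool, bottom_pool = repos[cutoff_hi:], repos[:cutoff_lo]

    n_top = round(top_fraction * n_total)
    picks = rng.choices(top_pool, weights=[s for _, s in top_pool], k=n_top)
    picks += rng.choices(bottom_pool,
                         weights=[max(s, 1) for _, s in bottom_pool],
                         k=n_total - n_top)
    return picks

# Example usage with toy data
toy = [(f"repo{i}", s) for i, s in enumerate([1, 3, 5, 20, 40, 80, 150, 400, 900, 2500])]
print(sample_repos(toy))
```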
2307.13854#60
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
61
Amrith Krishna, Sebastian Riedel, and Andreas Vlachos. 2022. ProoFVer: Natural logic theorem proving for fact verification. Transactions of the Association for Computational Linguistics, 10:1013–1030. Wojciech Kryscinski, Bryan McCann, Caiming Xiong, and Richard Socher. 2020. Evaluating the factual consistency of abstractive text summarization. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 9332–9346, Online. Association for Computational Linguistics. Patrick Lewis, Ethan Perez, Aleksandra Piktus, Fabio Petroni, Vladimir Karpukhin, Naman Goyal, Heinrich Küttler, Mike Lewis, Wen-tau Yih, Tim Rocktäschel, et al. 2020. Retrieval-augmented generation for knowledge-intensive NLP tasks. Advances in Neural Information Processing Systems, 33:9459–9474.
2307.13528#61
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
61
a triple of points on the ellipse is given by (a cos t_i, b sin t_i), i = 1, 2, 3, so the area of an inscribed triangle is given by $\frac{1}{2}\begin{vmatrix} 1 & a\cos t_1 & b\sin t_1 \\ 1 & a\cos t_2 & b\sin t_2 \\ 1 & a\cos t_3 & b\sin t_3 \end{vmatrix} = \frac{ab}{2}\begin{vmatrix} 1 & \cos t_1 & \sin t_1 \\ 1 & \cos t_2 & \sin t_2 \\ 1 & \cos t_3 & \sin t_3 \end{vmatrix}$ [Ground Truth Solution] which is ab times the area of a triangle inscribed in the unit circle. In the case of the circle, among all inscribed triangles with a given base 2w (0 < w ⩽ 1), the one of maximum area is an isosceles triangle whose area equals $g(w) = w\left(1 + \sqrt{1 - w^2}\right)$. Using elementary calculus one finds that the maximum of g on the interval 0 ⩽ w ⩽ 1 occurs at $w^2 = 3/4$, i.e., $w = \sqrt{3}/2$. Alternatively, fixing one side of the triangle as the basis, we easily see that among all the inscribed triangles the one with the greatest area is isosceles because of the maximum height, showing that the angle at the basis is the same. Fixing another side we see that the triangle is indeed equilateral. Hence, the area is maximal when $t_2 = t_1 + \frac{2\pi}{3}$ and $t_3 = t_2 + \frac{2\pi}{3}$
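For clarity, a worked version of the "elementary calculus" step in the ground-truth solution above; this derivation is a reconstruction added for readability, not text from the ARB dataset itself.

```latex
% Worked version of the elementary-calculus step above (a reconstruction for
% clarity, not text from the ARB dataset).
\[
  g(w) = w\bigl(1 + \sqrt{1 - w^2}\bigr), \qquad
  g'(w) = 1 + \sqrt{1 - w^2} - \frac{w^2}{\sqrt{1 - w^2}} .
\]
Setting $g'(w) = 0$ and multiplying by $\sqrt{1 - w^2}$ gives
\[
  \sqrt{1 - w^2} + (1 - w^2) - w^2 = 0
  \;\Longrightarrow\; \sqrt{1 - w^2} = 2w^2 - 1
  \;\Longrightarrow\; 4w^4 = 3w^2 ,
\]
so $w^2 = 3/4$, i.e.\ $w = \sqrt{3}/2$, which is the half-side of an
equilateral triangle inscribed in the unit circle. The maximal area in the
unit circle is therefore
\[
  g\!\left(\tfrac{\sqrt{3}}{2}\right)
  = \tfrac{\sqrt{3}}{2}\left(1 + \tfrac{1}{2}\right)
  = \tfrac{3\sqrt{3}}{4},
\]
and scaling by $ab$ gives the maximal inscribed-triangle area
$\tfrac{3\sqrt{3}}{4}\,ab$ for the ellipse.
```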
2307.13692#61
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
61
Lastly, we configure the knowledge resources as individual websites, complemented with search functionality for efficient information retrieval. Specifically, we utilize Kiwix10 to host an offline version of English Wikipedia with a knowledge cutoff of May 2023. The user manuals for GitLab and the Adobe Commerce Merchant documentation are scraped from the official websites. A.2 ENVIRONMENT DELIVERY AND RESET One goal for our evaluation environment is ease of use and reproducibility. As a result, we deploy our websites in separate Docker images11, one per website. The Docker images are fully self-contained, with all the code of the website, the database, as well as any other software dependencies. They also do not rely on external volume mounts to function, as the data of the websites is also part of the Docker image. This way, each image is easy to distribute and contains the fully pre-populated website for reproducible evaluation. End users can download our packaged Docker images and run them on their systems to re-deploy the exact websites, together with the data used in our benchmarks, for their local benchmarking.
2307.13854#61
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
62
Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models. Yaobo Liang, Chenfei Wu, Ting Song, Wenshan Wu, Yan Xia, Yu Liu, Yang Ou, Shuai Lu, Lei Ji, Shaoguang Mao, Yun Wang, Linjun Shou, Ming Gong, and Nan Duan. 2023. TaskMatrix.AI: Completing tasks by connecting foundation models with millions of APIs. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. 2023. Let's verify step by step. Chin-Yew Lin. 2004. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74–81, Barcelona, Spain. Association for Computational Linguistics.
2307.13528#62
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
62
$t_2 = t_1 + \frac{2\pi}{3}$ and $t_3 = t_2 + \frac{2\pi}{3}$, that is, when the corresponding triangle inscribed in the unit circle is regular. For the ellipse with semiaxes a, b, this corresponds to an inscribed triangle with maximum area equal to $\frac{3\sqrt{3}}{4}ab$. # Model Generated Rubric • Parameterize the ellipse using x = a cos t, y = b sin t (2 points). • Write the expression for the area of an inscribed triangle using the determinant formula (2 points): $\frac{ab}{2}\begin{vmatrix} 1 & \cos t_1 & \sin t_1 \\ 1 & \cos t_2 & \sin t_2 \\ 1 & \cos t_3 & \sin t_3 \end{vmatrix}$ (1) • Connect the problem to the maximum area of an inscribed triangle in the unit circle (2 points). • Correctly identify that the maximum area occurs when the inscribed triangle is equilateral (2 points). • Find the maximum area of an inscribed triangle in the ellipse, and provide the correct answer (2 points): $\frac{3\sqrt{3}}{4}ab$ (2) Table 7 (continued): Model Response
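An illustrative sketch (not ARB's evaluation code) of how a point-valued rubric like the one above can be turned into a normalized score; the criterion descriptions are paraphrased and the scoring helper is hypothetical.

```python
# Illustrative sketch (not ARB's evaluation code) of turning a model-generated
# rubric like the one above into a normalized score: each criterion carries a
# point value, the grader awards points per criterion, and the final score is
# the awarded fraction of the total.
rubric = [
    ("Parameterize the ellipse with x = a cos t, y = b sin t", 2),
    ("Write the determinant formula for the inscribed-triangle area", 2),
    ("Reduce to the maximum-area triangle in the unit circle", 2),
    ("Identify that the maximum is attained by an equilateral triangle", 2),
    ("State the final answer 3*sqrt(3)/4 * a * b", 2),
]

def rubric_score(points_awarded):
    """points_awarded: points the grader gave for each criterion, in order."""
    total = sum(cap for _, cap in rubric)
    awarded = sum(min(got, cap) for got, (_, cap) in zip(points_awarded, rubric))
    return awarded / total

# e.g. good partial progress: first three criteria earned, final answer missed
print(rubric_score([2, 2, 2, 0, 0]))  # 0.6
```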
2307.13692#62
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
62
Since some evaluation cases may require the agent to modify the data contained in the website, e.g., creating a new user, deleting a post, etc., it is crucial to be able to easily reset the website environment to its initial state. With Docker images, the users could stop and delete the currently running containers for that website and start the container from our original image again to fully reset the environment to the initial state. Depending on the website, this process may take from a few seconds to one minute. However, not all evaluation cases would require an environment reset, as 4https://github.com/magento/magento2 5https://postmill.xyz/ 6https://www.reddit.com/ 7https://redditlist.com/sfw.html 8https://gitlab.com/gitlab-org/gitlab 9https://www.openstreetmap.org/ 10https://www.kiwix.org/en/ 11https://www.docker.com/ [Figure 6: The intent distribution across different websites. Cross-site intents necessitate interacting with multiple websites. Notably, regardless of the website, all user intents require interactions with multiple web pages.]
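A minimal sketch of the container-reset procedure described above, driving the standard docker CLI from Python; the image tag, container name, and port mapping in the example are placeholders, not WebArena's actual values.

```python
# Minimal sketch of the reset procedure described above: stop and remove the
# (possibly modified) container for a website, then start a fresh one from
# the pristine image. Names and ports below are placeholders, not WebArena's
# actual image tags.
import subprocess

def reset_website(container_name: str, image: str, port_mapping: str) -> None:
    # Stop and remove the running container; ignore errors if it is not running.
    subprocess.run(["docker", "stop", container_name], check=False)
    subprocess.run(["docker", "rm", container_name], check=False)
    # Start a clean container from the original image.
    subprocess.run(
        ["docker", "run", "-d", "--name", container_name, "-p", port_mapping, image],
        check=True,
    )

# e.g. reset_website("shopping", "example/shopping-site:latest", "7770:80")
```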
2307.13854#62
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
63
Stephanie Lin, Jacob Hilton, and Owain Evans. 2022. TruthfulQA: Measuring how models mimic human falsehoods. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3214–3252, Dublin, Ireland. Association for Computational Linguistics. Pengfei Liu, Jinlan Fu, Yang Xiao, Weizhe Yuan, Shuaichen Chang, Junqi Dai, Yixin Liu, Zihuiwen Ye, and Graham Neubig. 2021. ExplainaBoard: An explainable leaderboard for NLP. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing: System Demonstrations, pages 280–289, Online. Association for Computational Linguistics. Pengfei Liu, Weizhe Yuan, Jinlan Fu, Zhengbao Jiang, Hiroaki Hayashi, and Graham Neubig. 2023. Pre-train, prompt, and predict: A systematic survey of prompting methods in natural language processing. ACM Computing Surveys, 55(9):1–35.
2307.13528#63
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
63
$\frac{3\sqrt{3}}{4}ab$ (2) Table 7 (continued): Model Response. To maximize the area of the inscribed triangle, we can use the following approach: 1. Parameterize the ellipse: The equation of an ellipse with semiaxes a and b is given by $\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$. We can parameterize the ellipse using the parameter t as follows: x = a cos(t) and y = b sin(t), where 0 ≤ t ≤ 2π. 2. Let A(a cos(t1), b sin(t1)), B(a cos(t2), b sin(t2)), and C(a cos(t3), b sin(t3)) be the vertices of the inscribed triangle, where 0 ≤ t1, t2, t3 ≤ 2π. 3. Calculate the area of the triangle: Using the Shoelace Theorem, the area of the triangle is given by: $\text{Area} = \frac{1}{2}\left|\sum_{i=1}^{3}\left(x_i y_{i+1} - x_{i+1} y_i\right)\right|$, where $(x_4, y_4) = (x_1, y_1)$. Substituting the parameterized coordinates, we get: $\frac{1}{2}\left|\,{+}\,a^2\cos(t_3)\sin(t_1) - a^2\cos(t_2)\sin(t_1) - a^2\cos(t_3)\sin(t_2) - a^2\cos(t_1)\sin(t_3)\right|$
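A small numerical check of the shoelace area formula and the claimed maximum inscribed-triangle area; this snippet is added for illustration and is not part of the ARB dataset or the model's response.

```python
# Numerical check (illustrative, not part of the ARB dataset): for points
# (a cos t_i, b sin t_i) spaced 2*pi/3 apart, the shoelace area should equal
# (3*sqrt(3)/4) * a * b, and random triples should never exceed that value.
import math, random

def shoelace_area(pts):
    n = len(pts)
    s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
            for i in range(n))
    return abs(s) / 2

def ellipse_triangle(a, b, t1, t2, t3):
    return [(a * math.cos(t), b * math.sin(t)) for t in (t1, t2, t3)]

a, b = 3.0, 2.0
t1 = 0.7
equilateral = ellipse_triangle(a, b, t1, t1 + 2 * math.pi / 3, t1 + 4 * math.pi / 3)
print(shoelace_area(equilateral))            # ~7.794, matches the claimed maximum
print(3 * math.sqrt(3) / 4 * a * b)          # 7.794...

best = max(shoelace_area(ellipse_triangle(a, b, *(random.uniform(0, 2 * math.pi)
           for _ in range(3)))) for _ in range(100_000))
print(best <= 3 * math.sqrt(3) / 4 * a * b + 1e-9)  # True: random triples never exceed it
```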
2307.13692#63
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
63
Figure 6: The intent distribution across different websites. Cross-site intents necessitate interacting with multiple websites. Notably, regardless of the website, all user intents require interactions with multiple web pages. Many of the intents are information gathering and are read-only for the website data. Also, combined with the inference time cost for the agent LLMs, we argue that this environment reset method, through restarting Docker containers from the original images, will have a non-negligible but small impact on evaluation time. A.3 USER ROLES SIMULATION Users of the same website often have disparate experiences due to their distinct roles, permissions, and interaction histories. For instance, within an E-commerce CMS, a shop owner might possess full read and write permissions across all content, whereas an employee might only be granted write permissions for products but not for customer data. We aim to emulate this scenario by generating unique user profiles on each platform.
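The environment-reset idea mentioned in this chunk, restarting Docker containers from the original images, can be sketched as follows. The container name, image tag, port mapping, and the helper itself are illustrative assumptions, not WebArena's actual tooling.

```python
# Minimal sketch of resetting a website environment by recreating its
# container from the pristine image (names below are made up).
import subprocess

def reset_site(container: str, image: str, port_map: str) -> None:
    # Remove the (possibly modified) running container, ignoring errors
    # if it does not exist.
    subprocess.run(["docker", "rm", "-f", container], check=False)
    # Start a fresh container from the original image.
    subprocess.run(
        ["docker", "run", "-d", "--name", container, "-p", port_map, image],
        check=True,
    )

# Example usage (illustrative names):
# reset_site("shopping_site", "webarena/shopping:original", "7770:80")
```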
2307.13854#63
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
64
Yixin Liu, Alexander R Fabbri, Pengfei Liu, Yilun Zhao, Linyong Nan, Ruilin Han, Simeng Han, Shafiq Joty, Chien-Sheng Wu, Caiming Xiong, et al. 2022. Revisiting the gold standard: Grounding summarization evaluation with robust human evaluation. arXiv preprint arXiv:2212.07981. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, Shashank Gupta, Bodhisattwa Prasad Majumder, Katherine Hermann, Sean Welleck, Amir Yazdanbakhsh, and Peter Clark. 2023. Self-refine: Iterative refinement with self-feedback. OpenAI. 2023. Gpt-4 technical report. Ofir Press, Muru Zhang, Sewon Min, Ludwig Schmidt, Noah A. Smith, and Mike Lewis. 2022. Measuring and narrowing the compositionality gap in language models.
2307.13528#64
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
64
Simplifying the expression, we get: Area = (a²/2) |cos(t1) sin(t2) − cos(t2) sin(t1) + cos(t2) sin(t3) − cos(t3) sin(t2) + cos(t3) sin(t1) − cos(t1) sin(t3)|. Using the sine difference identity, sin(A − B) = sin(A) cos(B) − cos(A) sin(B), we can rewrite the area as: Area = (a²/2) |sin(t1 − t2) + sin(t2 − t3) + sin(t3 − t1)|. 4. Maximize the area: Notice that the maximum value of each sine term is 1, and this occurs when the angles t1, t2, t3 are equally spaced around the ellipse, i.e., when the triangle is equilateral. In this case, the area becomes: Area = (a²/2) |1 + 1 + 1| = 3a²/2. 5. Relate the maximum area to the semiaxes: Since the triangle is equilateral, the ratio of the area of the triangle to the area of the ellipse is constant. The area of the ellipse is given by πab. Thus, we can write:
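For reference, the three sine terms in the quoted response cannot all equal 1 at once, because their arguments sum to zero modulo 2π. A short editorial derivation of the actual constrained maximum is sketched below (with the correct ab/2 prefactor rather than the a²/2 used in the response); it recovers the ground-truth answer quoted later in the table.

```latex
% Constrained maximum of the sine sum, with alpha + beta + gamma = 2*pi
% (a standard bound, stated here as a sketch, not a quote from the paper).
\[
  \max_{\alpha+\beta+\gamma=2\pi}\,\bigl|\sin\alpha+\sin\beta+\sin\gamma\bigr|
  \;=\; 3\sin\tfrac{2\pi}{3} \;=\; \tfrac{3\sqrt{3}}{2},
  \qquad\text{attained at } \alpha=\beta=\gamma=\tfrac{2\pi}{3},
\]
\[
  \text{so that}\qquad
  \mathrm{Area}_{\max} \;=\; \frac{ab}{2}\cdot\frac{3\sqrt{3}}{2}
  \;=\; \frac{3\sqrt{3}}{4}\,ab .
\]
```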
2307.13692#64
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
64
On the shopping site, we created a customer profile that has over 35 orders within a span of two years. On GitLab, we selected a user who maintains several popular open-source projects with numerous merge requests and issues. This user also manages a handful of personal projects privately. On Reddit, our chosen profile was a user who actively participates in discussions, with many posts and comments. Lastly, on our E-commerce CMS, we set up a user profile for a shop owner who has full read-and-write access to all system contents. All users are automatically logged into their accounts using a pre-cached cookie. To our best knowledge, this is the first publicly available agent evaluation environment to implement such a characteristic. Existing literature typically operates under the assumption of universally identical user roles Shi et al. (2017); Liu et al. (2018); Deng et al. (2023). A.4 INTENT DISTRIBUTION The distribution of intents across the websites are shown in Figure 6. A.5 EXPERIMENT CONFIGURATIONS
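The "pre-cached cookie" login described in this chunk can be illustrated with Playwright's storage-state mechanism. Whether WebArena uses exactly this call is not stated here, and the file name and URL below are placeholders.

```python
# Sketch: reuse a previously saved authentication state instead of
# performing an interactive login.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    # "shop_admin_auth.json" is an assumed, previously exported storage state.
    context = browser.new_context(storage_state="shop_admin_auth.json")
    page = context.new_page()
    page.goto("http://localhost:7780/admin")   # placeholder CMS URL
    print(page.title())
    browser.close()
```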
2307.13854#64
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
65
Jane Dwivedi-Yu, Roberto Dessì, Roberta Raileanu, Maria Lomeli, Luke Zettlemoyer, Nicola Cancedda, and Thomas Scialom. 2023. Toolformer: Language models can teach themselves to use tools. John Schulman. 2023. Reinforcement learning from human feedback: Progress and challenges. Yongliang Shen, Kaitao Song, Xu Tan, Dongsheng Li, Weiming Lu, and Yueting Zhuang. 2023. Hugginggpt: Solving ai tasks with chatgpt and its friends in huggingface. Freda Shi, Daniel Fried, Marjan Ghazvininejad, Luke Zettlemoyer, and Sida I. Wang. 2022. Natural language to code translation with execution. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 3533–3546, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. Ross Taylor, Marcin Kardas, Guillem Cucurull, Thomas Scialom, Anthony Hartshorn, Elvis Saravia, Andrew Poulton, Viktor Kerkez, and Robert Stojnic. 2022. Galactica: A large language model for science.
2307.13528#65
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
65
Area_max / (πab) = (3a²/2) / (πab). Solving for the maximum area, we get: Area_max = (3a²/2) · πab = 3πa³b/2. Thus, the maximum area of an inscribed triangle in an ellipse with semiaxes a and b is 3πa³b/2. ANSWER: 3πa³b/2, and the triangles with maximum area are equilateral. Table 7 (continued) # Rubric Based Evaluation 1. The student parameterized the ellipse correctly using x = a cos(t) and y = b sin(t). (2 points) 2. The student used the Shoelace Theorem to find the area of the triangle, but did not use the determinant formula as specified in the rubric. (0 points) 3. The student did not explicitly connect the problem to the maximum area of an inscribed triangle in the unit circle. (0 points) 4. The student correctly identified that the maximum area occurs when the inscribed triangle is equilateral. (2 points) 5. The student found the maximum area of the inscribed triangle in the ellipse, but their answer is incorrect. The correct answer is 3√3ab/4, not 3πa³b/2. (0 points) Rubric Score: 4 points
2307.13692#65
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
65
A.4 INTENT DISTRIBUTION The distribution of intents across the websites is shown in Figure 6. A.5 EXPERIMENT CONFIGURATIONS We experiment with GPT-3.5-TURBO-16K-0613, GPT-4-0613, and TEXT-BISON-001 with a temperature of 1.0 and a top-p parameter of 0.9. The maximum number of state transitions is set to 30. We halt execution if the same action is repeated more than three times on the same observation or if the agent generates three consecutive invalid actions. These situations typically indicate a high likelihood of execution failure and hence warrant early termination. For TEXT-BISON-001, we additionally allow ten retries until it generates a valid action. Primarily, we use a high temperature of 1.0 to encourage exploration. To aid replicating the results, we provide the results of GPT-3.5-TURBO-16K-0613 with temperature 0.0 in Table 5 and the execution trajectories in our code repository. Table 5: The task success rate (SR %) of GPT-3.5-TURBO-16K-0613 with temperature 0.0. Model: GPT-3.5, CoT: ✓, UA Hint: ✗, SR: 6.28.
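The run limits described in this chunk (temperature 1.0, top-p 0.9, at most 30 state transitions, early stopping on repeated or invalid actions) can be captured in a small configuration and stopping helper. This is an illustrative sketch under those stated limits, not the benchmark's code; all names are assumptions.

```python
# Sketch of the sampling configuration and early-termination rules.
from dataclasses import dataclass

@dataclass
class RunConfig:
    temperature: float = 1.0
    top_p: float = 0.9
    max_steps: int = 30
    max_repeats: int = 3     # same action on the same observation
    max_invalid: int = 3     # consecutive invalid actions

def should_stop(history, invalid_streak, cfg=RunConfig()):
    """history is a list of (observation, action) pairs already executed."""
    if len(history) >= cfg.max_steps:
        return True
    if invalid_streak >= cfg.max_invalid:
        return True
    if len(history) >= cfg.max_repeats:
        last = history[-cfg.max_repeats:]
        # Same action issued on the same observation several times in a row
        # (approximating the repetition rule described in the chunk).
        if all(pair == last[0] for pair in last):
            return True
    return False
```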
2307.13854#65
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
66
Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Christos Thorne, Christodoulopoulos, and Arpit Mittal. 2018a. FEVER: a large-scale dataset for fact extraction and VERification. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 809–819, New Orleans, Louisiana. Association for Computational Linguistics. Christos Thorne, Christodoulopoulos, and Arpit Mittal. 2018b. FEVER: a large-scale dataset for fact extraction and VERification. In NAACL-HLT. Alex Wang, Kyunghyun Cho, and Mike Lewis. 2020. Asking and answering questions to evaluate the factual consistency of summaries. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5008–5020, Online. Association for Computational Linguistics.
2307.13528#66
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
66
, not 3πa³b/2. (0 points) Table 8: Performance of GPT-3.5-turbo on symbolic equivalence versus human ground truth. The model achieves a false positive rate of 0, but allows many false negatives. This suggests there are many cases in which the model does not recognize equivalent expressions. Physics Symbolic: true positives 4, false positives 0, true negatives 22, false negatives 6, accuracy 0.76. Math Symbolic: true positives 3, false positives 0, true negatives 9, false negatives 8, accuracy 0.67. Table 9: Problem, solution and GPT-4 response for a question from the MCAT part of the benchmark. The model mistakes the two copies of the DNA for the two amino acid strands in a DNA. # Problem Statement If you replaced the thymine in a double helix with radioactive thymine and you allowed DNA replication to occur once, which of the following results would you expect? Choices: A: The DNA would not be radioactive in either of the double helices. B: The mRNA made from the double helices would be radioactive. C: The DNA in each of the double helices would be radioactive. D: The DNA in one of the double helices would be radioactive but not in the other one.
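Table 8 in this chunk grades answers by symbolic equivalence with GPT-3.5 as the judge. For contrast, a purely rule-based equivalence check can be sketched with SymPy; this is not the grader evaluated in the table, and the expression strings below are made-up examples.

```python
# Sketch of a rule-based symbolic-equivalence check using SymPy.
import sympy as sp

def symbolically_equivalent(expr_a: str, expr_b: str) -> bool:
    a, b = sp.sympify(expr_a), sp.sympify(expr_b)
    # Two closed forms are treated as equivalent if their difference
    # simplifies to zero.
    return sp.simplify(a - b) == 0

print(symbolically_equivalent("3*sqrt(3)*a*b/4", "(3/4)*sqrt(3)*a*b"))  # True
print(symbolically_equivalent("3*pi*a**3*b/2", "3*sqrt(3)*a*b/4"))      # False
```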
2307.13692#66
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
66
Table 5: The task success rate (SR %) of GPT-3.5-TURBO-16K-0613 with temperature 0.0. # A.6 PROMPT FOR FUZZY_MATCH Help a teacher to grade the answer of a student given a question. Keep in mind that the student may use different phrasing or wording to answer the question. The goal is to evaluate whether the answer is semantically equivalent to the reference answer. question: {{intent}} reference answer: {{reference answer}} all the string ’N/A’ that you see is a special sequence that means ’not achievable’ student answer: {{prediction}} Conclude the judgement by correct/incorrect/partially correct. Predictions that are judged as “correct” will receive a score of one, while all other predictions will receive a score of zero. A.7 THE PROMPTS OF THE BASELINE WEB AGENTS
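The fuzzy_match prompt above can be wrapped in a small scoring helper. The sketch below abridges the template, and call_llm is a placeholder for whatever model client is used; it illustrates only the scoring rule stated in the chunk (an unqualified "correct" verdict scores one, everything else zero), not the project's implementation.

```python
# Sketch of scoring a prediction with the fuzzy_match grading prompt.
FUZZY_MATCH_TEMPLATE = (
    "Help a teacher to grade the answer of a student given a question. "
    "Keep in mind that the student may use different phrasing or wording "
    "to answer the question. The goal is to evaluate whether the answer is "
    "semantically equivalent to the reference answer.\n"
    "question: {intent}\n"
    "reference answer: {reference}\n"
    "student answer: {prediction}\n"
    "Conclude the judgement by correct/incorrect/partially correct."
)

def fuzzy_match_score(intent, reference, prediction, call_llm):
    judgement = call_llm(FUZZY_MATCH_TEMPLATE.format(
        intent=intent, reference=reference, prediction=prediction)).strip().lower()
    # "incorrect" and "partially correct" both score zero.
    if "incorrect" in judgement or "partially" in judgement:
        return 0.0
    return 1.0 if "correct" in judgement else 0.0
```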
2307.13854#66
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
67
Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, and Hannaneh Hajishirzi. 2023. Self-instruct: Aligning language models with self-generated instructions. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, and Denny Zhou. 2023. Chain-of-thought prompting elicits reasoning in large language models. Pengcheng Yin and Graham Neubig. 2017. A syntactic neural model for general-purpose code generation. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 440–450, Vancouver, Canada. Association for Computational Linguistics. Wanjun Zhong, Jingjing Xu, Duyu Tang, Zenan Xu, Nan Duan, Ming Zhou, Jiahai Wang, and Jian Yin. 2020. Reasoning over semantic-level graph for fact checking. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6170–6180, Online. Association for Computational Linguistics.
2307.13528#67
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
67
The correct answer is C. During semiconservative replication, the DNA helix unwinds so that both strands of DNA can serve as a template. Both strands are copied, producing two double helices. Each one consists of a template strand and a newly synthesized strand. Model Response: First, let’s consider the process of DNA replication. DNA replication is a semi-conservative process, which means that each new double helix formed contains one original strand and one newly synthesized strand. In this case, the original strand contains radioactive thymine. Now let’s analyze the answer choices: A: The DNA would not be radioactive in either of the double helices. This choice is incorrect because we know that the original strand contains radioactive thymine, and since replication is semi-conservative, each new double helix will have one original strand with radioactive thymine. B: The mRNA made from the double helices would be radioactive. This choice is unrelated to the question since it concerns mRNA instead of the DNA replication process. Therefore, we can rule out this choice. C: The DNA in each of the double helices would be radioactive. This choice suggests that both strands in each new double helix would be radioactive. However, we know that only the original strand contains radioactive thymine, and the newly synthesized strand does not. Thus, this choice is not correct.
2307.13692#67
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
67
A.7 THE PROMPTS OF THE BASELINE WEB AGENTS The system message of the reasoning agent for both GPT-3.5 and GPT-4 is in Figure 7, and two examples are in Figure 8. The system message of the direct agent for GPT-3.5 is in Figure 9 and the two examples are in Figure 10. UA hint refers to the instruction of “ If you believe the task is impossible to complete, provide the answer as "N/A" in the bracket.”. We remove this sentence in our ablation studies. A.8 ADDITIONAL ERROR ANALYSIS Observation Bias Realistic websites frequently present information on similar topics across various sections to ensure optimal user accessibility. However, a GPT-4 agent often demonstrates a tendency to latch onto the first related piece of information it encounters without sufficiently verifying its relevance or accuracy. For instance, the homepage of the E-Commerce CMS displays the best-selling items based on recent purchases, while historical best-seller data is typically accessed via a separate report. Presented with the task of “What is the top-1 best-selling product in 2022”, the GPT-4 agent defaults to leveraging the readily available information on the homepage, bypassing the necessary step of generating the report to obtain the accurate data.
2307.13854#67
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
68
Chunting Zhou, Pengfei Liu, Puxin Xu, Srini Iyer, Jiao Sun, Yuning Mao, Xuezhe Ma, Avia Efrat, Ping Yu, Lili Yu, et al. 2023. Lima: Less is more for alignment. arXiv preprint arXiv:2305.11206. # A Prompts We list the claim extraction, query generation, and agreement verification prompts used in this paper. All the prompts listed are user prompts. We use the same system prompt “You are a brilliant assistant.”
2307.13528#68
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
68
D: The DNA in one of the double helices would be radioactive but not in the other one. This choice aligns with the concept of semi-conservative replication. Each new double helix contains one original strand (with radioactive thymine) and one newly synthesized strand (without radioactive thymine). Therefore, the correct answer is: ANSWER: D 20 Table 10: Problem, solution and GPT-4 sesponse for a question from the Math Numerical part of the benchmark. The model often returns a numerical answer after doing some internal black-box numerical calculation, which is rarely correct. Problem Statement Compute 0 x sin x 1 + sin2 x dx. We use the example from the introduction for the particular function f (x) = x transform the integral into 2 2 sina T [ a dn. o il+sin*« This is the same as 5 d(cos T [ _A{cos) 2) ; 0 «6-2 — cos" & 2 Ground Truth Solution 0 which with the substitution t = cos x becomes √ √ 1 ho my V2+t mr) V2+1 T zat = In = In 9 2-t 2/2 v2-t], 2/2) V2-1 To compute the integral " 0 x sin x 1 + sin2 x dx, # sin x 1 + sin2 x
2307.13692#68
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
68
Failures in Observation Interpretation Interestingly, while GPT-4 is capable of summarizing the observations, it occasionally overlooks more granular information, such as the previously entered input. As in the right-hand example of Figure 11, [5172] StaticText indicates that the search term “DMV area” has already been entered. However, the agent disregards this detail and continuously issues the command type [2430] [DMV area] until it reaches the maximum step limit. Furthermore, the agent often neglects the previous action information that is provided alongside the observation. We hypothesize that these observed failures are related to the current pretraining and supervised fine-tuning on dialogues employed in GPT models Ouyang et al. (2022). These models are primarily trained to execute instructions given immediate observations (i.e., the dialogue history); thereby, they may exhibit a lack of exploration. Furthermore, in dialogue scenarios, subtle differences in NL expressions often have less impact on the overall conversation. As a result, models may tend to overlook minor variations in their observations. You are an autonomous intelligent agent tasked with navigating a web browser. You will be given web-based tasks. These tasks will be accomplished through the use of specific actions you can issue.
2307.13854#68
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
69
[KB-Based QA] You are given a piece of text that includes knowledge claims. A claim is a statement that asserts something as true or false, which can be verified by humans. [Task] Your task is to accurately identify and extract every claim stated in the provided text. Then, resolve any coreference (pronouns or other referring expressions) in the claim for clarity. Each claim should be concise (less than 15 words) and self-contained. Your response MUST be a list of dictionaries. Each dictionary should contains the key "claim", which correspond to the extracted claim (with all coreferences resolved). You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH ’[’. [Response Format] [{"claim": "Ensure that the claim is fewer than 15 words and conveys a complete idea. Resolve any coreference (pronouns or other referring expressions) in the claim for clarity." },... ] Here are two examples: [text]: Tomas Berdych defeated Gael Monfis
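The response format specified in this prompt, a JSON list of {"claim": ...} dictionaries, suggests a simple consumer. The sketch below assumes a call_llm placeholder and that the input is appended under the same "[text]:" convention the prompt's examples use; neither detail is spelled out in the chunk, so treat this as illustrative rather than FacTool's interface.

```python
# Sketch: extract claims from text via the claim-extraction prompt and
# parse the model's JSON-list response.
import json

def extract_claims(text: str, prompt_template: str, call_llm):
    raw = call_llm(prompt_template + "\n[text]: " + text + "\n[response]: ")
    try:
        claims = json.loads(raw)
    except json.JSONDecodeError:
        return []   # treat malformed output as "no claims extracted"
    return [c["claim"] for c in claims if isinstance(c, dict) and "claim" in c]
```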
2307.13528#69
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
69
# Model Response To compute the integral ∫₀^π x sin x / (1 + sin²x) dx, we can use integration by parts. Let u = x and dv = sin x / (1 + sin²x) dx. Then, we have du = dx and v = ∫ sin x / (1 + sin²x) dx. To find the integral of v, we can use the substitution method. Let t = sin x, then dt = cos x dx. The integral becomes: ∫ t / (1 + t²) dt = (1/2) ∫ 2t / (1 + t²) dt = (1/2) ln(1 + t²) + C. Now, we can substitute back t = sin x, and we get: v = (1/2) ln(1 + sin²x) + C. Now, we can apply integration by parts: ∫₀^π x sin x / (1 + sin²x) dx = [uv]₀^π − ∫₀^π v du = [x · (1/2) ln(1 + sin²x)]₀^π − ∫₀^π (1/2) ln(1 + sin²x) dx. The first term is: [x · (1/2) ln(1 + sin²x)]₀^π = π · (1/2) ln(1) − 0 · (1/2) ln(1 + sin²0) = 0. Now, we need to compute the second term: − ∫₀^π (1/2) ln(1 + sin²x) dx
2307.13692#69
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
69
Here’s the information you’ll have: The user’s objective: This is the task you’re trying to complete. The current web page’s accessibility tree: This is a simplified representation of the webpage, providing key information. The current web page’s URL: This is the page you’re currently navigating. The open tabs: These are the tabs you have open. The previous action: This is the action you just performed. It may be helpful to track your progress. The actions you can perform fall into several categories: Page Operation Actions `click [id]`: This action clicks on an element with a specific id on the webpage. `type [id] [content] [press_enter_after=0|1]`: Use this to type the content into the field with id. By default, the "Enter" key is pressed after typing unless press_enter_after is set to 0. `hover [id]`: Hover over an element with id. `press [key_comb]`: Simulates the pressing of a key combination on the keyboard (e.g., Ctrl+v). `scroll [direction=down|up]`: Scroll the page up or down. Tab Management Actions: `new_tab`: Open a new, empty
2307.13854#69
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
70
referring expressions) in the claim for clarity." },... ] Here are two examples: [text]: Tomas Berdych defeated Gael Monfis 6-1, 6-4 on Saturday. The sixth-seed reaches Monte Carlo Masters final for the first time. Berdych will face either Rafael Nadal or Novak Djokovic in the final. [response]: [{"claim": "Tomas Berdych defeated Gael Monfis 6-1, 6-4"}, {"claim": "Tomas Berdych defeated Gael Monfis 6-1, 6-4 on Saturday"}, {"claim": "Tomas Berdych reaches Monte Carlo Masters final"}, {"claim": "Tomas Berdych is the sixth-seed"}, {"claim": "Tomas Berdych reaches Monte Carlo Masters final for the first time"}, {"claim": "Berdych will face either Rafael Nadal or Novak Djokovic"}, {"claim": "Berdych will face either Rafael Nadal or Novak Djokovic in the final"}] [text]: Tinder only
2307.13528#70
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
70
Now, we need to compute the second term: −∫₀¹ (1/2) ln(1 + sin² x) dx. Unfortunately, this integral does not have a closed-form solution in terms of elementary functions. However, it is possible to compute its numerical value using numerical integration methods, such as the trapezoidal rule, Simpson’s rule, or Gaussian quadrature. Using numerical integration, we can approximate the value of the integral to be approximately −1.9348. Therefore, the final answer is: ANSWER: −1.9348. Table 11: Problem, solution and GPT-4 response for a question from the law part of the benchmark.
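The quoted response resorts to numerical integration for this term. As an independent check — a minimal sketch assuming the integrand reconstructed above, (1/2) ln(1 + sin² x) on [0, 1]; it is not part of the benchmark or of the model response — one could evaluate it with SciPy:

```python
import numpy as np
from scipy.integrate import quad

# Numerically evaluate -∫_0^1 (1/2) * ln(1 + sin^2(x)) dx, the term discussed above.
value, abs_err = quad(lambda x: 0.5 * np.log(1.0 + np.sin(x) ** 2), 0.0, 1.0)
print(-value, abs_err)  # negated integral and the quadrature's error estimate
```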
2307.13692#70
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
70
Ctrl+v). `scroll [direction=down|up]`: Scroll the page up or down. Tab Management Actions: `new_tab`: Open a new, empty browser tab. `tab_focus [tab_index]`: Switch the browser’s focus to a specific tab using its index. `close_tab`: Close the currently active tab. URL Navigation Actions: `goto [url]`: Navigate to a specific URL. `go_back`: Navigate to the previously viewed page. `go_forward`: Navigate to the next page (if a previous `go_back` action was performed). Completion Action: `stop [answer]`: Issue this action when you believe the task is complete. If the objective is to find a text-based answer, provide the answer in the bracket. If you believe the task is impossible to complete, provide the answer as "N/A" in the bracket. Homepage: If you want to visit other websites, check out the homepage at http://homepage.com. It has a list of websites you can visit. http://homepage.com/password.html lists all the account names and passwords for the websites. You can use them to log in to the websites.
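The action strings listed above follow a simple `name [arg] ... [arg]` convention. A minimal sketch of turning such strings into a structured form — the regex and dataclass are illustrative assumptions, not WebArena's actual implementation — might look like:

```python
import re
from dataclasses import dataclass

@dataclass
class Action:
    name: str        # e.g. "click", "type", "goto", "stop"
    args: list[str]  # bracketed arguments in order, e.g. ["164", "restaurants near ABC", "1"]

def parse_action(text: str) -> Action:
    """Parse strings like 'type [164] [restaurants near ABC] [1]' or 'go_back'."""
    name = text.split("[", 1)[0].strip()
    args = re.findall(r"\[([^\]]*)\]", text)
    return Action(name=name, args=args)

print(parse_action("stop [N/A]"))
print(parse_action("scroll [down]"))
```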
2307.13854#70
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
71
{"claim": "Berdych will face either Rafael Nadal or Novak Djokovic in the final"}] [text]: Tinder only displays the last 34 photos - but users can easily see more. Firm also said it had improved its mutual friends feature. [response]: [{"claim": "Tinder only displays the last photos"}, {"claim": "Tinder only displays the last 34 photos"}, {"claim": "Tinder users can easily see more photos"}, {"claim": "Tinder said it had improved its feature"}, {"claim": "Tinder said it had improved its mutual friends feature"}] Now complete the following: [text]: {input_text} [response]:
2307.13528#71
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
71
Table 11: Problem, solution and GPT-4 response for a question from the law part of the benchmark. Mom, a wealthy woman, wished to buy her son, Sylvester, an expensive Rolls-Royce for a wedding present. She visited Dealership, a Rolls-Royce dealer, several times, looked at many cars, and discussed possible sales prices with Huck, a salesman. On May 15, after much discussion, Huck and Mom signed a writing that stated, "If we can agree on price on or before June 1, Mom agrees to buy and Huck, for Dealership, agrees to sell one yellow Rolls-Royce Silver Streak, serial number XO857623." On May 20, Mom dispatched a letter to Huck stating, "I will buy the Rolls-Royce for $150,000." On the same day, Huck dispatched a letter to Mom, stating, "I will sell the Rolls-Royce for $150,000." Has a valid contract been formed between Dealership and Mom? Problem Statement
2307.13692#71
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
71
To be successful, it is very important to follow the following rules: 1. You should only issue an action that is valid given the current observation 2. You should only issue one action at a time. 3. You should follow the examples to reason step by step and then issue the next action. 4. Generate the action in the correct format. Start with a "In summary, the next action I will perform is" phrase, followed by action inside ``````. For example, "In summary, the next action I will perform is ```click [1234]```". 5. Issue stop action when you think you have achieved the objective. Don’t generate anything after stop.""" Figure 7: The system message of the reasoning agent. This message has a general explanation of the task, the available actions, and some notes on avoiding common failures.
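Rule 4 pins down the surface form of the model's reply: the chosen action always sits inside a triple-backtick span after a fixed phrase. A hedged sketch of extracting that action from a completion — the regex is an assumption about this format, not code from the paper — could be:

```python
import re

def extract_action(completion: str) -> str | None:
    """Return the content of the last ```...``` span in the completion, or None if absent."""
    matches = re.findall(r"```(.*?)```", completion, flags=re.DOTALL)
    return matches[-1].strip() if matches else None

demo = "Let's think step-by-step. The price is shown on the page. In summary, the next action I will perform is ```stop [$279.49]```"
print(extract_action(demo))  # -> "stop [$279.49]"
```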
2307.13854#71
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
72
[Math Problems] You are given a math problem and a potential solution to the math problem. [Task] Your task is to identify all the math calculations that involve arithmetic operations between known real numbers within the potential solution. However, do not include math calculations that contain variable(s). Your response MUST be a list of dictionaries. Each dictionary should contains 2 key - "math_calculation" and "calculated_answer", which correspond to the extracted math calculation, and the calculated answer within the potential solution. You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH ’[’. [Response format]: [{"math_calculation": "Extracted math calculation involving real numbers within the potential solution. Do not include math calculations that contains variable(s). Do not include units such as $, %, etc.", "calculated_answer": "The calculated answer for the extracted math calculation."},...] Here are two examples: [math problem]: What is the
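Once calculations and claimed answers have been extracted in this format, each one can be checked by re-evaluating the expression and comparing it with the claimed answer. The sketch below is an assumed illustration of such a check (FacTool's released implementation may differ) and only handles plain arithmetic:

```python
import ast
import operator

# Safe evaluator for the plain arithmetic expressions extracted by the prompt above (illustrative only).
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv, ast.Pow: operator.pow}

def _eval(node):
    if isinstance(node, ast.Expression):
        return _eval(node.body)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
        return -_eval(node.operand)
    raise ValueError("unsupported expression")

def check_calculation(math_calculation: str, calculated_answer: str, tol: float = 1e-2) -> bool:
    """Re-evaluate the extracted expression and compare it with the claimed answer."""
    value = _eval(ast.parse(math_calculation, mode="eval"))
    return abs(value - float(calculated_answer)) <= tol

print(check_calculation("0.20 * 45", "9"))  # True
print(check_calculation("45 - 9", "36"))    # True
```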
2307.13528#72
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
72
Problem Statement Answer Candidates: A.) Yes, because the May 15 writing constitutes a contract with a missing price term, and that term was filled by the crossing offers. B.) Yes, because when two crossing offers are identical in import, one will be treated as an offer and the other as an acceptance. C.) No, because there were two crossing offers and no acceptance; hence there was no mutual assent. D.) No, but the result would be different if Mom were a merchant. Ground Truth Solution: C. Model Response:
2307.13692#72
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
72
example_user OBSERVATION: [1744] link ’HP CB782A#ABA 640 Inkjet Fax Machine (Renewed)’ [1749] StaticText ’$279.49’ [1757] button ’Add to Cart’ [1760] button ’Add to Wish List’ [1761] button ’Add to Compare’ URL: http://onestopmarket.com/office-products/office-electronics.html OBJECTIVE: What is the price of HP Inkjet Fax Machine PREVIOUS ACTION: None example_assistant Let’s think step-by-step. This page lists the information of HP Inkjet Fax Machine, which is the product identified in the objective. Its price is $279.49. I think I have achieved the objective. I will issue the stop action with the answer. In summary, the next action I will perform is ```stop [$279.49]``` example_user OBSERVATION: [164] textbox ’Search’ focused: True required: False [171] button ’Go’ [174] link ’Find directions between two points’ [212] heading ’Search Results’ [216] button ’Close’ URL: http://openstreetmap.org OBJECTIVE: Show me
2307.13854#72
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
73
%, etc.", "calculated_answer": "The calculated answer for the extracted math calculation."},...] Here are two examples: [math problem]: What is the area of a circle with a diameter of 10 inches? [potential solution]: To find the area, we first calculate the radius as the diameter divided by 2, so the radius is 10/2 = 5 inches. Then, we use the formula for the area of a circle, which is πr². Plugging in the radius we get, Area = π·5² = 78.54 square inches. [response]: [{"math_calculation": "10/2", "calculated_answer": "5"}, {"math_calculation": "π*5^2", "calculated_answer": "78.54"}] [math problem]: A store originally sold a shirt for $45. They are offering a 20% discount on the shirt. How much will the shirt cost now? [potential solution]: The discount on the shirt is calculated as 20% of $45, which is 0.20 * 45 = $9. The new price of the shirt after the discount is $45 - $9 = $36. [response]: [{"math_calculation": "0.20 * 45", "calculated_answer": "9"}, {"math_calculation": "45 - 9", "calculated_answer": "36"}] Now complete
2307.13528#73
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
73
First, let’s analyze the May 15 writing. The writing states that if the price can be agreed upon by June 1, Mom and Huck will enter into a contract for the purchase of the specific Rolls-Royce. At this point, there is no agreement for the price, so there is no valid contract formed yet. Next, let’s examine the crossing offers on May 20. Mom sent a letter stating she would buy the car for $150,000, and Huck sent a letter stating he would sell the car for $150,000. Both letters were dispatched on the same day, and they agreed on the same price. Since the price was one of the main terms that needed to be agreed upon, and both parties agreed on the same price, this could be seen as one party making an offer and the other accepting. Thus, a valid contract could be formed. Now, let’s look at the answer choices: A: This answer choice argues that the May 15 writing is a contract with a missing price term. However, the May 15 writing itself stated that an agreement would only be formed if the price was agreed upon. Therefore, this answer choice is incorrect. B: This answer choice suggests that when two crossing offers are identical, one will be treated as an offer and the other as an
2307.13692#73
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
73
between two points’ [212] heading ’Search Results’ [216] button ’Close’ URL: http://openstreetmap.org OBJECTIVE: Show me the restaurants near ABC PREVIOUS ACTION: None example_assistant Let’s think step-by-step. This page has a search box whose ID is [164]. According to the nominatim rule of openstreetmap, I can search for the restaurants near a location by "restaurants near". I can submit my typing by pressing the Enter afterwards. In summary, the next action I will perform is ```type [164] [restaurants near ABC] [1]```
2307.13854#73
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13692
74
Therefore, this answer choice is incorrect. B: This answer choice suggests that when two crossing offers are identical, one will be treated as an offer and the other as an acceptance. This accurately reflects the situation between Mom and Huck, as they both agreed on the same price. C: This answer choice argues that there was no mutual assent because there were two crossing offers and no acceptance. However, since both offers agreed on the same price, one could be seen as an offer and the other as an acceptance, forming a valid contract. D: This answer choice states that the result would be different if Mom were a merchant. However, the fact that Mom is not a merchant does not affect the formation of a valid contract in this situation. Given the analysis of the answer choices, the correct answer is: ANSWER: B
2307.13692#74
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
74
example_assistant Let’s think step-by-step. This page has a search box whose ID is [164]. According to the nominatim rule of openstreetmap, I can search for the restaurants near a location by "restaurants near". I can submit my typing by pressing the Enter afterwards. In summary, the next action I will perform is ```type [164] [restaurants near ABC] [1]``` Figure 8: The two examples provided as example_user and example_assistant for the reasoning agent. Before issuing the action, the agent first performs reasoning.
2307.13854#74
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13692
75
Table 12: Example of memorization in ARB. GPT-3.5-turbo’s response indicates memorization of the correct expression. GPT-4 commits to its reasoning better, resulting in the wrong final answer. Problem: A cylindrical capacitor of length L consists of an inner conductor wire of radius a, a thin outer conducting shell of radius b. The space in between is filled with nonconducting material of dielectric constant ε. Find the capacitance. Reference Solution: Supposing that the charge per unit length of the inner wire is −λ and using cylindrical coordinates (r, φ, z), we find the electric field intensity in the capacitor by Gauss’ theorem to be E = −λ/(2πεr) e_r = Q/(2πεLr) e_r. The potential difference between the inner and outer conductors is V = −∫_a^b E · dr = λ/(2πε) ln(b/a). Hence the capacitance is C = λL/V = 2πεL/ln(b/a). gpt-3.5-turbo Solution: The capacitance of a cylindrical capacitor is given by the equation: C = 2πεL / ln(b/a)
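The reference result C = 2πεL/ln(b/a) can be reproduced symbolically. Below is a small SymPy check, offered as a sketch under the stated assumptions (field magnitude λ/(2πεr) between the conductors, total charge λL); it is not part of the benchmark:

```python
import sympy as sp

r, a, b, L, lam, eps = sp.symbols("r a b L lambda epsilon", positive=True)

# Field magnitude between the conductors from Gauss' theorem, as in the reference solution.
E = lam / (2 * sp.pi * eps * r)

# Potential difference between the conductors, then C = Q/V with Q = lambda*L.
V = sp.integrate(E, (r, a, b))   # lambda/(2*pi*eps) * (ln(b) - ln(a))
C = sp.simplify(lam * L / V)
print(C)                         # equivalent to 2*pi*eps*L / ln(b/a)
```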
2307.13692#75
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
75
You are an autonomous intelligent agent tasked with navigating a web browser. You will be given web-based tasks. These tasks will be accomplished through the use of specific actions you can issue. Here’s the information you’ll have: The user’s objective: This is the task you’re trying to complete. The current web page’s accessibility tree: This is a simplified representation of the webpage, providing key information. The current web page’s URL: This is the page you’re currently navigating. The open tabs: These are the tabs you have open. The previous action: This is the action you just performed. It may be helpful to track your progress. The actions you can perform fall into several categories: Page Operation Actions `click [id]`: This action clicks on an element with a specific id on the webpage. `type [id] [content] [press_enter_after=0|1]`: Use this to type the content into the field with id. By default, the "Enter" key is pressed after typing unless press_enter_after is set to 0. `hover [id]`: Hover over an element with id. `press [key_comb]`: Simulates the pressing of a key combination on the
2307.13854#75
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
76
45", "calculated_answer": "9"}, {"math_calculation": "45 - 9", "calculated_answer": "36"}] Figure 6: Prompts for Claim Extraction [Scientific Literature Review] You are given a piece of text that mentions some scientific literature. [Task] Your task is to accurately find all papers mentioned in the text and identify the title, author(s), and publication year for each paper. The response should be a list of dictionaries, with each dictionary having keys "paper_title", "paper_author(s)", and "paper_pub_year", which correspond to the title of the paper, the authors of the paper, and the publication year of the paper. The following is the given text: [text]: {input_text} You MUST only respond in the format as described below. DO NOT RESPOND WITH ANYTHING ELSE. ADDING ANY OTHER EXTRA NOTES THAT VIOLATE THE RESPONSE FORMAT IS BANNED. START YOUR RESPONSE WITH ’[’. [Response Format]: [ { "paper_title": "Title of the paper.", "paper_author(s)": "Author(s) of the paper.", "paper_pub_year": "Year of the paper published." }, ... ]
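For the scientific-literature setting, the expected response is again a JSON list, here with three fixed keys per paper. A rough sketch of validating that a model response matches this schema — purely illustrative, with an assumed helper name and placeholder demo data, not FacTool's code — could be:

```python
import json

REQUIRED_KEYS = {"paper_title", "paper_author(s)", "paper_pub_year"}

def validate_papers(response_text: str) -> list[dict]:
    """Keep only entries that carry all required keys and a 4-digit publication year."""
    valid = []
    for entry in json.loads(response_text):
        if isinstance(entry, dict) and REQUIRED_KEYS <= entry.keys():
            year = str(entry["paper_pub_year"]).strip()
            if year.isdigit() and len(year) == 4:
                valid.append(entry)
    return valid

demo = '[{"paper_title": "An Example Paper", "paper_author(s)": "A. Author, B. Author", "paper_pub_year": "2020"}]'
print(validate_papers(demo))
```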
2307.13528#76
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
76
gpt-3.5-turbo Solution: The capacitance of a cylindrical capacitor is given by the equation: C = 2πεL / ln(b/a), where ε is the dielectric constant, L is the length of the capacitor, a is the radius of the inner conductor, and b is the radius of the outer conductor. Therefore, the capacitance of the given cylindrical capacitor is: C = 2πεL / ln(b/a) = 2πεL / (ln(b) − ln(a)). ANSWER: 2πεL / (ln(b) − ln(a)). gpt-4 Solution (Table 12, continued): To find the capacitance of the cylindrical capacitor, we will use the general formula for capacitance: C = Q/V, where Q is the charge stored in the capacitor and V is the potential difference between the inner conductor wire and the outer conducting shell. First, let’s find the electric field E between the two conductors. We can use Gauss’s law, which states that the electric flux through a closed surface is equal to the charge enclosed by the surface divided by the permittivity of free space ε0: ∮ E⃗ · dA⃗ = Q_enclosed / ε0.
2307.13692#76
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]