Dataset schema (field, type, and value-length range as reported by the dataset viewer):
- doi: string (length 10)
- chunk-id: int64 (0 to 936)
- chunk: string (401 to 2.02k characters)
- id: string (12 to 14 characters)
- title: string (8 to 162 characters)
- summary: string (228 to 1.92k characters)
- source: string (length 31)
- authors: string (7 to 6.97k characters)
- categories: string (5 to 107 characters)
- comment: string (4 to 398 characters)
- journal_ref: string (8 to 194 characters)
- primary_category: string (5 to 17 characters)
- published: string (length 8)
- updated: string (length 8)
- references: list
2307.13528
20
Figure 2: Our proposed framework for factuality detection in four domains: knowledge-based QA, code generation, math problem solving and scientific literature review writing.
$\{c_i\}_{i=1\cdots n}$. Detailed prompting instructions can be found in Appendix A.
and the calculated answer. We prompt ChatGPT to extract all such claims.
KB-based QA The claim is defined using the concept of atomic content units (ACUs) (Liu et al., 2022). Each ACU corresponds to a single atomic fact within a generated answer. In practice, we leverage ChatGPT (specifically, the “gpt-3.5-turbo” version) to extract claims based on two criteria: (i) each claim should not exceed 15 words, and (ii) it should clearly describe a fact. We also include two in-context examples from the RoSE dataset (Liu et al., 2022) in our prompt to obtain more fine-grained claims. Additionally, we ask ChatGPT to resolve any coreferences or ambiguity, such as unclear pronouns and other related expressions, within the claims.
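As a rough illustration of the claim-extraction step described in this chunk, the sketch below asks an OpenAI-style chat model for atomic claims under the two stated criteria. The prompt wording, the `extract_claims` helper, and the JSON-list output format are assumptions for illustration, not FacTool's actual prompt (which is in the paper's Appendix A).

```python
# Sketch of ACU-style claim extraction with an OpenAI-style chat client.
# The prompt wording and JSON-list output format are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "Extract every atomic factual claim from the answer below. Each claim must be "
    "at most 15 words, clearly describe a single fact, and resolve any pronouns or "
    "coreferences. Return a JSON list of strings.\n\nAnswer:\n{answer}"
)

def extract_claims(answer: str, model: str = "gpt-3.5-turbo") -> list[str]:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(answer=answer)}],
        temperature=0,
    )
    try:
        return json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        return []  # fall back to no claims if the model did not return valid JSON
```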
2307.13528#20
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
20
Some problems ask for multiple things to be proven or calculated. Our graders gave the model a score of 0.5 if it correctly derived at least half of the "subproblems" (for example, homology groups of a given manifold). With this more benevolent form of grading, the performance of GPT-4 on the Proof-like problems jumps to 16%. Where applicable, slight discrepancy with automatic evaluation is also possible due to the error tolerance. We note that many of the problems in Physics Symbolic have correct symbolic answers even when there are flaws in the chain of thought reasoning of GPT-4. This is likely due to some kind of memorization, although not necessarily from the same sources: see Table 12 for an example. It is possible that our graders underestimate the rate of arithmetic mistakes in some cases, especially when the approach is clearly wrong, or it is not clear whether a given error is due to faulty reasoning or due to a missed term in the calculations. For the larger subsets (see Table 1), we subsample the problems to between 20 and 40 per subject area; this is enough for a ballpark estimate of the frequency of different errors, and is not worth increasing because attributing error types is inherently fuzzy. # 5 Model-based Rubric Evaluation
2307.13692#20
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
20
Else: "{game_state} Imagine that you are playing for {utility} of money. " "Answer the following questions by providing only a single number on a scale from 1 to 100, where 1 means not at all and 100 means a lot: 1. How important to you is it that you win? 2. How likely is it that you win? 3. How much control do you have over winning? 4. How much do you feel hope? 5. How much do you feel fear? 6. How much do you feel joy? 7. How much do you feel sadness? 8. How much do you feel anger? " "Please do not respond anything else other than the answers to the 8 questions above. " "Please put the answer in the following JSON format and make all data types to be string and use all lowercase. It is very important. " '{"1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""} ' "Please avoid evading the questions by providing a non-committal or generic response, such as 50 in this case." 2.2.2 Results
2307.13779#20
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
20
Finally, we introduce fuzzy_match where we utilize a language model to assess whether ˆa is semantically equivalent to a∗. Specifically, in this work, we use gpt-4-0613 to perform this evaluation. The corresponding prompt details are provided in Appendix A.6. The fuzzy_match function applies to situations where the format of the answer is diverse. For instance, in responding to “Compare the time for walking and driving route from AMC Waterfront to Randyland”, it is essential to ensure that driving time and walking time are accurately linked with the correct terms. The fuzzy_match function could also flexibly match the time “2h58min” with different forms such as “2 hour 58 minutes”, “2:58” and others. Evaluating Site Navigation and Content & Config Tasks The tasks in these categories require accessing web pages that meet certain conditions or performing operations that modify the underlying data storage of the respective websites. To assess these, we establish reward functions rprog(s) that programmatically examine the intermediate states s within an execution trajectory to ascertain whether the outcome aligns with the intended result. These intermediate states are often the underlying databases of the websites, the status, and the content of a web page at each step of the execution.
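A minimal sketch of the three answer-comparison functions described in this chunk follows; the string normalization and the equivalence prompt used by `fuzzy_match` are assumptions, not WebArena's exact implementation (the real prompt is in the paper's Appendix A.6).

```python
# Sketch of the three answer-checking primitives; details are illustrative.
from openai import OpenAI

client = OpenAI()

def exact_match(pred: str, ref: str) -> bool:
    return pred.strip().lower() == ref.strip().lower()

def must_include(pred: str, ref: str) -> bool:
    return ref.strip().lower() in pred.strip().lower()

def fuzzy_match(pred: str, ref: str, intent: str, model: str = "gpt-4-0613") -> bool:
    prompt = (
        f"Task: {intent}\nReference answer: {ref}\nPredicted answer: {pred}\n"
        "Are the two answers semantically equivalent for this task? Answer yes or no."
    )
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")

# e.g. fuzzy_match("2 hour 58 minutes", "walking: 2h58min", "Compare the time ...")
```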
2307.13854#20
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
20
# Supplemental Materials
Full-resolution versions of all images and tables from this publication are available at https://llm4edu.experiments.kordinglab.com/paper. The FreeText server will be hosted temporarily for public use at https://llm4edu.experiments.kordinglab.com/app, with an interactive example at https://llm4edu.experiments.kordinglab.com/app/assignments/1393754a-d80f-474d-bff7-b1fec36cdbb7. Educators may contact us at the correspondence email of this preprint for a token, which is required to create new questions on our public instance. The Jupyter Notebook widget is available on GitHub at https://github.com/KordingLab/freetext-jupyter, and is powered by the FreeText Server, which can be found at https://github.com/KordingLab/llm4teach-freetext-server.
# Acknowledgements
Research in this publication was supported by the National Institutes of Health under award number UC2-NS128361. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
# Bibliography
2308.02439#20
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
21
Scientific Literature Review Each claim within the generated review is defined as a tuple of “(paper title, year, authors)” extracted from the generated review. We then prompt ChatGPT to extract all such tuples within the generated review.
# 4.2 Query Generation
For each claim $c_i$, we convert it into a list of queries $\{q_{ij}\}_{j=1\cdots m}$ that can be used to query external tools such as search engines, the Python interpreter, or Google Scholar.
KB-based QA We prompt ChatGPT or GPT-4 to generate two search engine queries from each claim $c_i$. These queries are intended to help humans in verifying the factuality of $c_i$. Detailed prompting instructions can be found in Appendix A.
Code Generation We consider each generated code snippet within the response as a single claim to be verified. We extract all such code snippets that are enclosed in brackets, in other words, within a code block.
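A minimal sketch of pulling fenced code blocks out of a generated response, matching the "each code block is one claim" definition above; the triple-backtick fence pattern is an assumption about how the response is formatted.

```python
# Sketch: treat each fenced code block in a response as one claim to verify.
import re

FENCE = "`" * 3  # built programmatically so the fence does not close this example block
CODE_BLOCK = re.compile(FENCE + r"[\w+-]*\n(.*?)" + FENCE, re.DOTALL)

def extract_code_claims(response: str) -> list[str]:
    return [block.strip() for block in CODE_BLOCK.findall(response)]
```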
2307.13528#21
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
21
# 5 Model-based Rubric Evaluation
As reasoning tasks increase in complexity, it gets harder to evaluate model performance. Symbolic final answers are in some cases difficult to grade automatically. Further, we are often more interested in the correctness of the reasoning used to produce the final answer; but evaluating intermediate reasoning steps requires expert human supervision. An ideal solution would be to use LLMs as evaluators based on a reference solution; unfortunately, there are major reliability issues. To improve reliability, we propose generating rubrics as an important component of the evaluation process. The model generates the rubric from the reference solution, then evaluates any solution based on the generated rubric. To aid rubric generation, we give few-shot examples of human-written rubrics to the rubric-generating model. We study this approach by conducting a human evaluation of GPT-4-generated rubrics and of GPT-4's grading of its own solutions using the generated rubrics.
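A minimal sketch of the two-step procedure described above (generate a rubric from the reference solution with few-shot human-written rubrics, then grade a candidate against it), assuming an OpenAI-style chat client. The prompt wording, the 10-point total, and the helper names are illustrative, not ARB's exact prompts.

```python
# Sketch of rubric generation followed by rubric-based grading.
from openai import OpenAI

client = OpenAI()

def _chat(prompt: str, model: str = "gpt-4") -> str:
    out = client.chat.completions.create(
        model=model, messages=[{"role": "user", "content": prompt}], temperature=0
    )
    return out.choices[0].message.content

def generate_rubric(problem: str, reference_solution: str, fewshot_rubrics: str) -> str:
    # Step 1: the model writes a rubric from the reference solution,
    # guided by a few human-written example rubrics.
    return _chat(
        f"Example rubrics:\n{fewshot_rubrics}\n\nProblem:\n{problem}\n\n"
        f"Reference solution:\n{reference_solution}\n\n"
        "Write a grading rubric that allocates 10 points across the key solution steps."
    )

def grade_with_rubric(problem: str, candidate_solution: str, rubric: str) -> str:
    # Step 2: the same model scores an arbitrary solution against the generated rubric.
    return _chat(
        f"Problem:\n{problem}\n\nRubric:\n{rubric}\n\n"
        f"Candidate solution:\n{candidate_solution}\n\n"
        "Assign points for each rubric item and report the total score out of 10."
    )
```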
2307.13692#21
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
21
2.2.2 Results
Similar to the results presented for the original prompt, we statistically analyze the impact of game state and goal-relevance for each emotion separately using a 3 (low, med, high relevance) x 5 (lost, losing, start, winning, won) ANOVA using the data generated by the adjusted queries. Figure SM.5 and Table SM.5 summarize the results.
[Figure SM.5: Intensity derivation results (corresponding to Fig. 8 in the paper). Panels plot derived intensity against probability for Hope, Fear, Joy, and Sadness, comparing Human, GPT-3.5, and GPT-4 under low, medium, and high utility.]
Impact of game state and goal-relevance for each emotion
2307.13779#21
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
21
Evaluating each instance involves two components. First, we provide a locator, tasked with retrieving the critical content pertinent to each intent. The implementation of this locator varies from a database query, a website-supported API call, to a JavaScript element selection on the relevant web page, depending on implementation feasibility. For example, the evaluation process for the intent of the fifth example in Table 1 first obtains the URL of the latest post by examining the last state in the state sequence s. Then it navigates to the corresponding post page and obtains the post’s content by running the JavaScript “document.querySelector(‘.submission__inner’).outerText”. Subsequently, we annotate keywords that need to exist within the located content. For example, the evaluation verifies if the post is correctly posted in the “nyc” subreddit by examining the URL of
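A minimal sketch of such a locator-plus-keyword check, assuming Playwright as the browser driver. The function name, the URL-based subreddit check, and the keyword list are illustrative, not WebArena's actual code.

```python
# Sketch of a JavaScript-based locator plus keyword check using Playwright.
from playwright.sync_api import sync_playwright

def post_matches(post_url: str, keywords: list[str]) -> bool:
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(post_url)
        # Same element selection as the example in the text above.
        content = page.evaluate(
            "document.querySelector('.submission__inner').outerText"
        )
        browser.close()
    # The subreddit check inspects the URL; the content check looks for keywords.
    in_nyc = "/f/nyc" in post_url.lower()  # assumed URL scheme of the forum site
    return in_nyc and all(kw.lower() in content.lower() for kw in keywords)
```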
2307.13854#21
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
21
# Bibliography
Anthropic. Claude, 2023. URL https://www.anthropic.com. Accessed: 24 July 2023.
T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D. Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020.
S. Burrows, I. Gurevych, and B. Stein. The eras and trends of automatic short answer grading. International Journal of Artificial Intelligence in Education, 25:60–117, 2015.
2308.02439#21
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
22
Code Generation For each claim $c_i$ we generate two different types of queries: simulated test case inputs, denoted as $\{q^t_{ij}\}_{j=1\cdots m}$, and potential solutions, denoted as $\{q^s_{ij}\}_{j=1\cdots m}$. Both types of queries are generated by ChatGPT or GPT-4. The simulated test case inputs are function calls generated for a given code snippet, while potential solutions are repeatedly generated solutions that ChatGPT generates in response to the user prompt
Math Problems We define each claim in a step-by-step math solution as the arithmetic operation performed between known real numbers. Each of these operations contains two parts: the calculation and the calculated answer.
[Footnote 4: We have also explored other entailment-based models with BERT, and the result is no better than ChatGPT.]
using $t_i$ as the input and collect the execution result (output) for each $(t_i, s_j)$ pair. The input-output pairs are used as test cases for verifying the chatbot-generated, unverified solution. The process is shown in Fig. 3.
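A minimal sketch of executing every (test input, candidate solution) pair and collecting the outputs as test cases, as described above. The exec-based runner and the `func_name` argument are assumptions for illustration and run unsandboxed.

```python
# Sketch of collecting execution results for every (t_i, s_j) pair.
from typing import Any

def run_candidate(solution_code: str, func_name: str, test_input: tuple) -> Any:
    namespace: dict = {}
    exec(solution_code, namespace)      # assumes the snippet defines `func_name`
    return namespace[func_name](*test_input)

def collect_outputs(solutions: list[str], func_name: str,
                    test_inputs: list[tuple]) -> dict[tuple[int, int], Any]:
    results = {}
    for i, t in enumerate(test_inputs):
        for j, s in enumerate(solutions):
            try:
                results[(i, j)] = run_candidate(s, func_name, t)
            except Exception as err:    # a crashing candidate yields no usable output
                results[(i, j)] = ("error", str(err))
    return results
```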
2307.13528#22
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
22
We rate the quality of GPT-4-generated rubrics by hand in the first two rows of Table 4. Likert scores from 1-5 are assigned to both the coverage of the rubric, i.e. how well it captures key subproblems, and the point breakdown. Rubric quality scores are reported in Table 5 for symbolic and proof-like problems. We find GPT-4 designs rubrics which cover the crucial solution steps well, but struggles to properly allocate points to each step based on relative importance. However, it is much better than GPT-3.5-turbo, which tends to over-allocate points to only one or two solution steps.
Table 4: Evaluations of rubric quality and GPT-4 rubric evaluation failure cases. Rubric coverage and rubric point spread are on a 1-5 Likert scale. Alternative solutions is the percentage of correct solutions found not covered by the rubric. Extra/reduced credit track how often GPT-4 erroneously assigns or deducts points. Hallucinated rubric tracks how often GPT-4 assigns points by referring to a rubric item not actually present in the rubric. (Columns: Physics Symbolic, Math Symbolic, Proof-like.)
Rubric coverage: 4.42; Rubric point spread: 4.16; Alternative solutions: 5%; Extra credit: 27%; Reduced credit: 11%; Hallucinated rubric: 0%
2307.13692#22
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
22
Table SM.5 (ANOVA results). Columns: Goal-relevance, Game State, Interaction Effect; rows: GPT-3.5 and GPT-4, each for Hope, Fear, Joy, Sadness.
Goal-relevance main effects (in row order): F(2, 1485) = 1.02, p = .36, η² = .001; F(2, 1485) = 42.05, p < .001***, η² = .05; F(2, 1485) = 8.13, p < .001***, η² = .01; F(2, 1485) = 26.66, p < .001***, η² = .03; F(2, 1485) = 15.22, p < .001***, η² = .02; F(2, 1485) = 1645.7, p < .001***, η² = .69; F(2, 1485) = 2.251, p = .106, η² = .003; F(2, 1485) = 520.1, p < .001***, η² = .41.
Game State main effects (in row order): F(4, 1485) = 2647.6, p < .001***, η² = .88; F(4, 1485) = 196.71, p < .001***, η² = .35; F(4, 1485) = 3395.4, p < .001***, η² = .90
2307.13779#22
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
22
| Function | ID | Intent | Eval Implementation |
|---|---|---|---|
| rinfo | 1 | Tell me the name of the customer who has the most cancellations in the history | exact_match(ˆa, “Samantha Jones”) |
| rinfo | 2 | Find the customer name and email with phone number 8015551212 | must_include(ˆa, “Sean Miller”); must_include(ˆa, “[email protected]”) |
| rinfo | 3 | Compare walking and driving time from AMC Waterfront to Randyland | fuzzy_match(ˆa, “walking: 2h58min”); fuzzy_match(ˆa, “driving: 21min”) |
| rprog(s) | 4 | Checkout merge requests assigned to me | url=locate_current_url(s); exact_match(URL, “gitlab.com/merge_requests?assignee_username=byteblaze”) |
| rprog(s) | 5 | Post to ask “whether I need a car in NYC” | |
Table 1: We introduce two evaluation approaches. rinfo (top) measures the correctness of performing information-seeking tasks. It compares the predicted answer ˆa with the annotated reference a∗ with three implementations. rprog (bottom) programmatically checks whether the intermediate states during the executions possess the anticipated properties specified by the intent.
2307.13854#22
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
22
C. Cao. Leveraging large language model and story-based gamification in intelligent tutoring system to scaffold introductory programming courses: A design-based research study, 2023.
H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros, M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei. Scaling instruction-finetuned language models, 2022.
2308.02439#22
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
23
Math Problems We collect the execution results for code snippets derived from the mathematical operations. As illustrated in Fig. 2, math claims like “30 / 3 = 10” are extracted and then converted into executable Python code, for instance, “print(round(30/3, 7)==10)”.
Figure 3: Unit test library generation for detecting factual errors in code generation.
Scientific Literature Review We use the title of each paper, extracted from the text, as the query to access relevant information through the Google Scholar API provided by the Scholarly Python package. This allows us to retrieve key information about each paper, including the paper title, author list, and publication year.
p. In our later experiments, we generate 3 simulated test case inputs and 3 potential solutions. Detailed prompting instructions can be found in Appendix A.
Math Problems We prompt ChatGPT or GPT-4 to convert all mathematical operations into executable Python code snippets. These snippets are designed to return “True” when the calculation matches the calculated answer and “False” if it doesn’t. Detailed prompting instructions can be found in Appendix A.
# 4.4 Agreement Verification
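A minimal sketch of verifying an arithmetic claim such as “30 / 3 = 10”, mirroring the round(..., 7) comparison quoted above. The claim-parsing and the regex guard around eval are assumptions for illustration, not FacTool's exact code.

```python
# Sketch: check "<expression> = <answer>" claims with 7-digit rounding.
import re

ARITHMETIC_ONLY = re.compile(r"[\d\s\.\+\-\*/\(\)]+")

def check_math_claim(claim: str) -> bool:
    lhs, rhs = claim.split("=")
    if not ARITHMETIC_ONLY.fullmatch(lhs):
        raise ValueError("claim contains non-arithmetic characters")
    return round(eval(lhs), 7) == round(float(rhs), 7)

assert check_math_claim("30 / 3 = 10") is True
assert check_math_claim("2 + 2 = 5") is False
```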
2307.13528#23
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
23
Rubric coverage: 4.42; Rubric point spread: 4.16; Alternative solutions: 5%; Extra credit: 27%; Reduced credit: 11%; Hallucinated rubric: 0%.
The obvious limitation of rubric scoring is the case of correct solutions not covered by the rubric. We find that on our benchmark, GPT-4 rarely generates a fully or even mostly partially correct solution that does not follow the rubric. Once we have rated the model-generated rubrics, we manually grade GPT-4's solutions according to each rubric and compare the result to GPT-4's evaluation. We also annotate, for each problem, whether GPT-4 assigns credit inappropriately and whether it fails to assign credit when it should.
Table 5: Average scores (out of 10 points) when assigned by human annotators versus GPT-4. Correlation is the Pearson correlation coefficient between the two scores, over all problems.

| | Physics Symbolic | Math Symbolic | Proof-like |
|---|---|---|---|
| Human eval score | 5.00 | 3.13 | 2.65 |
| Model eval score | 5.05 | 3.37 | 3.8 |
| Correlation | 0.91 | 0.78 | 0.82 |
2307.13692#23
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
23
p < .001***, η² = .35.
Game State main effects (continued, in row order): F(4, 1485) = 3395.4, p < .001***, η² = .90; F(4, 1485) = 692.43, p < .001***, η² = .65; F(4, 1485) = 8809.9, p < .001***, η² = .96; F(4, 1485) = 1624.0, p < .001***, η² = .81; F(4, 1485) = 9456.2, p < .001***, η² = .96; F(4, 1485) = 4013.7, p < .001***, η² = .92.
Interaction effects (in row order): F(8, 1485) = 2.378, p = .015*, η² = .01; F(8, 1485) = 18.67, p < .001***, η² = .09; F(8, 1485) = 3.342, p < .001***, η² = .02; F(8, 1485) = 22.43, p < .001***, η² = .11; F(8, 1485) = 15.23, p < .001***, η² = .08; F(8, 1485) = 322.7, p < .001***,
2307.13779#23
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
23
the post and if the post contains the requested content by examining the post content. We reuse the exact_match and must_include functions from information-seeking tasks for this purpose. Unachievable Tasks Due to constraints such as inadequate evidence, user permissions (§A.3), or the absence of necessary functional support on the website, humans may ask for tasks that are not possible to complete. Inspired by previous work on evaluating question-answering models on unanswerable questions (Rajpurkar et al., 2018), we design unachievable tasks in WebArena. For instance, fulfilling an intent like “Tell me the contact number of OneStopShop” is impracticable in WebArena, given that the website does not provide such contact information. We label such instances as "N/A" and expect an agent to produce an equivalent response. These examples allow us to assess an agent’s ability to avoid making unfounded claims and its adherence to factual accuracy.
2307.13854#23
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
23
M. O. Dzikovska, R. Nielsen, C. Brew, C. Leacock, D. Giampiccolo, L. Bentivogli, P. Clark, I. Dagan, and H. T. Dang. Semeval-2013 task 7: The joint student re- sponse analysis and 8th recognizing textual entailment challenge. In Second Joint Conference on Lexical and Computational Semantics (* SEM), Volume 2: Pro- ceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 263–274, 2013. J. Finnie-Ansley, P. Denny, B. A. Becker, A. Luxton-Reilly, and J. Prather. The robots are coming: Exploring the implications of openai codex on introductory program- ming. In Proceedings of the 24th Australasian Computing Education Conference, pages 10–19, 2022. M. Heilman and N. Madnani. ETS: Domain adaptation and stacking for short an- swer scoring. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Se- mantic Evaluation (SemEval 2013), pages 275–279. Association for Computational Linguistics, 2013.
2308.02439#23
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
24
# 4.4 Agreement Verification
In the final step, each claim $c_i$ receives a binary factuality label, $L_i \in \{\text{TRUE}, \text{FALSE}\}$, based on the level of support it receives from the collected evidence $\{e_{ik}\}_{k=1\cdots l_i}$. This labeling process is performed for every individual claim.
Scientific Literature Review We use the paper title, found within the extracted claim tuple, as the query for Google Scholar. Our assumption here is that if a paper exists, it should appear as the first search result on Google Scholar when we use the paper title as the query.
KB-based QA We prompt ChatGPT or GPT-4 to judge the factuality of the claim given the retrieved list of evidence snippets. We follow a zero-shot Chain-of-Thought (Wei et al., 2023) reasoning process: initially, the model attempts to reason about whether the claim is factual or not. If an error is identified, we then ask it to explain and attempt to rectify the mistake.
# 4.3 Tool Querying & Evidence Collection
We then use the queries to query various tools to collect relevant evidence statements $\{e_{ik}\}_{k=1\cdots l_i}$.
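A minimal sketch of checking a “(paper title, year, authors)” claim against Google Scholar via the scholarly package, following the first-search-result assumption stated above. The field names under `bib` follow scholarly's usual result schema but may differ across versions, so treat the whole function as illustrative.

```python
# Sketch of verifying a scientific-literature claim with the scholarly package.
from scholarly import scholarly

def check_citation(title: str, year: str, first_author_lastname: str) -> bool:
    # Assumption from the text: a real paper should be the first Google Scholar
    # hit when queried by its title.
    try:
        hit = next(scholarly.search_pubs(title))
    except StopIteration:
        return False
    bib = hit.get("bib", {})
    authors = bib.get("author", [])
    if isinstance(authors, str):
        authors = [authors]
    title_ok = bib.get("title", "").strip().lower() == title.strip().lower()
    year_ok = str(bib.get("pub_year", "")) == str(year)
    author_ok = any(first_author_lastname.lower() in a.lower() for a in authors)
    return title_ok and year_ok and author_ok
```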
2307.13528#24
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
24
Table 5 (continued): Proof-like — Human eval score 2.65, Model eval score 3.8, Correlation 0.82; Physics Symbolic — Human eval score 5.00, Model eval score 5.05, Correlation 0.91.
We find a moderately high correlation between GPT-4's evaluation score and the manual score. In some cases, the model assigns an extra point or two when compared to the annotated rubric score. However, the self-eval score almost never deviates more than two points from the ground truth. The main failure mode we detect is the assignment of partial credit to attempted solutions completely outside the problem rubric, where the human evaluation score is always zero. Taken together, we believe these results suggest that rubric-based evaluation is a promising automated evaluation method. Having established rubric-based evaluation as an (imperfect) proxy for correctness, we now comment on the GPT-4 performance graded by the rubric. Table 5 shows GPT-4 is best at generating correct intermediate reasoning steps for physics questions. Inspecting the model outputs suggests that GPT-4 is good at recalling relevant and useful concepts in physics for solving the relevant problem; however, it can struggle with the mathematical manipulations required to solve the problem. The model is worse at recognizing the correct concepts and formulating an appropriate plan for the math questions, particularly for proof-like problems.
# 6 Limitations and Conclusion
2307.13692#24
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
24
Annotation Process The intents were contributed by the authors following the annotation guideline in §3.1. Every author has extensive experience with web-based tasks. The reference answers to the information-seeking tasks were curated by the authors and an external annotator. To ensure consistency and accuracy, each question was annotated twice. If the two annotators disagreed, a third annotator finalized the annotation. The programs to evaluate the remaining examples were contributed by three of the authors who are proficient in JavaScript programming. Difficult tasks were often discussed collectively to ensure the correctness of the annotation. The annotation required the annotator to undertake the full execution and scrutinize the intermediate states. Human Performance We sample one task from each of the 170 templates and ask five computer science graduate students to perform these tasks. Human performance is summarized as follows. Overall, the human annotators complete 78.24% of the tasks, with lower performance on information-seeking tasks. Through examining the recorded trajectories, we found that 50% of the failures are due to misinterpreting the intent (e.g., providing travel distance when asked for travel time), incomplete answers (e.g., providing only name when asked for name and email), and incomplete executions (e.g., partially filling the product information), while the remaining instances have more severe failures, where the executions are off-target. # 4 BASELINE WEB AGENTS
2307.13854#24
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
24
J. Hollingsworth. Automatic graders for programming classes. Communications of the ACM, 3(10):528–529, 1960. ISSN 0001-0782. doi: 10.1145/367415.367422. S. Hsu, T. W. Li, Z. Zhang, M. Fowler, C. Zilles, and K. Karahalios. Attitudes surrounding an imperfect AI autograder. In Proceedings of the 2021 CHI conference on human factors in computing systems, pages 1–15, 2021. T. Kluyver, B. Ragan-Kelley, F. Pérez, B. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. Hamrick, J. Grout, S. Corlay, P. Ivanov, D. Avila, S. Abdalla, and C. Willing. Jupyter notebooks – a publishing format for reproducible computational workflows. In F. Loizides and B. Schmidt, editors, Positioning and Power in Academic Publishing: Players, Agents and Agendas, pages 87–90. IOS Press, 2016. G. Kortemeyer. Can an AI-tool grade assignments in an introductory physics course?, 2023.
2308.02439#24
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
25
# 4.3 Tool Querying & Evidence Collection We then use the queries to query various tools to collect relevant evidence statements {eik}k=1···li. Code Generation We conduct a majority vote for each test case across all solutions, establishing what we refer to as the “pseudo-golden output” for that particular test case. We repeat this process for every test case. Following this, we compare the execution result of the solution that’s under verification against all the test cases with the pseudo golden output. If the results match, we classify the solution under verification as true. Otherwise, it is deemed false. KB-based QA The external tool we use to help verify the factuality of the generated text is the Google Search API, which queries the internet for knowledge using the queries generated from the claims extracted from the generated text of LLM. We use the Google Search API provided by Serper5 to search the top pages and retrieve the most relevant search snippets included in the API’s response. We then parse the response to obtain different types of snippets such as answer boxes, knowledge graphs, and organic search results.
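As a rough illustration of the majority-vote check described above, the following sketch establishes a pseudo-golden output per test case and compares the solution under verification against it. The `execute` callable is a hypothetical sandboxed runner and the interfaces are assumptions, not the official FacTool implementation.

```python
# Sketch of the pseudo-golden-output majority vote described above.
# `execute(solution, test_input)` is a hypothetical sandboxed runner; the exact
# interfaces are assumptions rather than FacTool's actual code.
from collections import Counter

def verify_solution(solution, sampled_solutions, test_inputs, execute):
    """Return True iff `solution` matches the majority output on every test case."""
    for t in test_inputs:
        # Pseudo-golden output: the most common execution result across all
        # sampled solutions for this test case.
        outputs = [repr(execute(s, t)) for s in sampled_solutions]
        pseudo_golden = Counter(outputs).most_common(1)[0][0]
        if repr(execute(solution, t)) != pseudo_golden:
            return False
    return True

# Toy usage with plain Python callables standing in for generated solutions:
solutions = [lambda x: x * 2, lambda x: x * 2, lambda x: x + 2]
run = lambda s, t: s(t)
print(verify_solution(solutions[0], solutions, [1, 2, 3], run))  # True
print(verify_solution(solutions[2], solutions, [1, 2, 3], run))  # False
```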
2307.13528#25
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
25
# 6 Limitations and Conclusion In this paper, we presented ARB, a novel benchmark for evaluating advanced reasoning capabilities in large language models. Our dataset is composed of various problems from the sciences and law, sourced from graduate-level exams and professional resources. Despite advancements in current LLMs, their performance remains very low on the quantitative subjects in ARB’s tasks. We also proposed a rubric-based self-evaluation method, enabling LLMs to grade their own reasoning. This method is not yet reliable enough to replace human grading. We hope this method can be extended to more reliable and cheap testing of complex model outputs. As with all other benchmarks that are not created anew and kept secret, it is possible there is data contamination. For example, the MCAT books are not available for free in most jurisdictions, but it is certainly possible that some model creators have trained on them anyway. Finally, the benchmark does not remotely cover all aspects of human ability; a model solving this benchmark perfectly could still be much worse than most educated people in many aspects. Nevertheless, we hope that increasing the difficulty standards helps the research community ground the performance of increasingly powerful models more accurately. # Acknowledgments and Disclosure of Funding
2307.13692#25
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
25
Similarly, Figure SM.6 illustrates emotional distancing/engagement from the goal of winning, as a function of the game state for both models. GPT-4 demonstrates a significantly improved result compared to GPT-3.5 in predicting engagement.

[Figure SM.6: Consequence derivation results (corresponding to Fig. 9 in the paper). Panels: Human, GPT-3.5, GPT-4; legend: Lost, Losing, Winning, Won; x-axis: Low/Medium/High Relevance.]
2307.13779#25
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
25
# 4 BASELINE WEB AGENTS We experiment with three LLMs using two prompting strategies, both with two examples in the context. In the first setting, we ask the LLM to directly predict the next action given the current observation, the intent and the previously performed action. In the second setting, with the same information, the model first performs chain-of-thought reasoning steps in the text before the action prediction (CoT, Wei et al. (2022); Yao et al. (2022b)). Before the examples, we provide a detailed overview of the browser environment, the allowed actions, and many rules. To make the model aware of the unachievable tasks, the instruction explicitly asks the agent to stop if it believes the task is impossible to perform. We refer to this directive as Unachievable hint, or UA hint. This introduction is largely identical to the guidelines we presented to human annotators to ensure a fair comparison. We use an accessibility tree with element IDs as the observation space. The agent can identify which element to interact with by the ID of the element. For instance, the agent can issue click [1582] to click the “Add to Cart” button with the ID of 1582. The full prompts can be found in Appendix A.7. The detailed configurations of each model can be found in Appendix A.5. 5 RESULTS 5.1 MAIN RESULTS
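The action format described above ("click [1582]") lends itself to simple string parsing on the agent side. Below is a minimal, hypothetical sketch of such a parser; the action vocabulary and single-argument form are assumptions for illustration, not the benchmark's actual implementation, which supports a richer action space.

```python
# Hypothetical sketch: parse a predicted agent action such as "click [1582]".
# The action names and single-argument format are assumptions for illustration;
# WebArena's real action space and parser are richer than this.
import re

ACTION_RE = re.compile(r"^(?P<name>\w+)\s*\[(?P<arg>[^\]]*)\]")

def parse_action(model_output: str):
    """Return (action_name, argument) parsed from the model's predicted action."""
    m = ACTION_RE.match(model_output.strip())
    if m is None:
        raise ValueError(f"unrecognized action: {model_output!r}")
    return m.group("name"), m.group("arg")

print(parse_action("click [1582]"))   # ('click', '1582')
print(parse_action("stop [N/A]"))     # one way an agent might flag an unachievable task
```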
2307.13854#25
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
25
G. Kortemeyer. Can an AI-tool grade assignments in an introductory physics course?, 2023. In High-Performance Web Apps with FastAPI: The Asynchronous Web Framework Based on Modern Python, pages 29–64. Springer, 2023. X. Liu, S. Wang, P. Wang, and D. Wu. Automatic grading of programming assignments: An approach based on formal semantics. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), pages 126–137, 2019. doi: 10.1109/ICSE-SEET.2019.00022. M. Messer, N. C. C. Brown, M. Kölling, and M. Shi. Automated grading and feedback tools for programming education: A systematic review, 2023. Microsoft. Guidance. https://github.com/microsoft/guidance, 2023. Accessed: 24 July 2023. A. Mizumoto and M. Eguchi. Exploring the potential of using an AI language model for automated essay scoring. Research Methods in Applied Linguistics, 2(2):100050, 2023. ISSN 2772-7661. doi: 10.1016/j.rmal.2023.100050.
2308.02439#25
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
26
Math Problems We compile the results of each code snippet execution. If any snippet returns “False”, we classify the associated generated text x as false. Conversely, if all snippets yield “True”, we classify the corresponding generated text x as true.

Code Generation For each test case input ti and generated potential solution sj, we execute sj using ti as input.

Scientific Literature Review We compare the extracted claim: “(paper title, year, authors)” to the evidence: “(paper title, year, authors)” retrieved from Google Scholar API. For the paper title and year of publication, we conduct an exact, case-insensitive string match. As for the authors’ match, we prompt ChatGPT or GPT-4 to judge whether the author list in the extracted claim is a subset of the retrieved author list. All the information must be matched in order to be classified as “True”, otherwise “False”.

5https://serper.dev/ 6https://github.com/scholarly-python-package/scholarly

# 5 Dataset Construction

# 5.1 Prompt and Response Collection
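A rough sketch of the citation-matching rule described above follows. Note that the paper delegates the author-subset judgment to ChatGPT/GPT-4; the simple string heuristic here stands in for that step, so this is an approximation rather than the authors' implementation.

```python
# Sketch of the citation-matching rule: exact (case-insensitive) title and year
# match, plus an author-subset check. The paper uses ChatGPT/GPT-4 to judge the
# author subset; a plain string heuristic stands in for that step here.
def match_citation(claim, evidence):
    """claim / evidence: dicts with 'title', 'year', and 'authors' (list of names)."""
    title_ok = claim["title"].strip().lower() == evidence["title"].strip().lower()
    year_ok = str(claim["year"]) == str(evidence["year"])
    evidence_authors = {a.strip().lower() for a in evidence["authors"]}
    authors_ok = all(a.strip().lower() in evidence_authors for a in claim["authors"])
    return title_ok and year_ok and authors_ok   # all fields must match -> "True"

claim = {"title": "Attention Is All You Need", "year": 2017,
         "authors": ["Ashish Vaswani", "Noam Shazeer"]}
evidence = {"title": "attention is all you need", "year": "2017",
            "authors": ["Ashish Vaswani", "Noam Shazeer", "Niki Parmar"]}
print(match_citation(claim, evidence))  # True
```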
2307.13528#26
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
26
# Acknowledgments and Disclosure of Funding We thank Jeffrey Deng for developing and documenting the API, and building the project website. We would also like to thank Raunak Chowdhuri for helpful comments, and Zhangir Azerbayev for useful discussions early on in the project. TS is supported by NSF grant 1745583. # References Ibrahim M Alabdulmohsin, Behnam Neyshabur, and Xiaohua Zhai. Revisiting neural scaling laws in language and vision. Advances in Neural Information Processing Systems, 35:22300–22312, 2022. Daman Arora, Himanshu Gaurav Singh, and Mausam. Have LLMs advanced enough? A challenging problem solving benchmark for Large Language Models, 2023.
2307.13692#26
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
26
Figure SM.6: Consequence derivation results (corresponding to Fig. 9 in the paper)

Table SM.6 (Table 4 in the paper): Impact of game state and relevance on importance of winning

| Model | Factor | F value | p | η² (partial) |
|---|---|---|---|---|
| GPT-3.5 | Goal-relevance | 12.92 | p < .001*** | 0.02 |
| GPT-3.5 | Game State | 19745.19 | p < .001*** | 0.98 |
| GPT-3.5 | Interaction Effect | 15.33 | p < .001*** | 0.08 |
| GPT-4 | Goal-relevance | 4695.97 | p < .001*** | 0.86 |
| GPT-4 | Game State | 81.52 | p < .001*** | 0.18 |
| GPT-4 | Interaction Effect | 30.71 | p < .001*** | 0.14 |

# 2.3 Alternative framing

2.3.1 Prompt
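For readers who want to run this kind of analysis on their own ratings, the sketch below fits a two-way ANOVA (goal-relevance × game state) and computes partial η²; the generated ratings are hypothetical stand-ins, not the study's data.

```python
# Sketch: a two-way ANOVA of the kind summarized in Table SM.6 (game state x
# goal-relevance on the rated importance of winning), with partial eta squared.
# The ratings generated below are hypothetical stand-ins, not the study's data.
import itertools
import random

import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

random.seed(0)
base = {"lost": 20, "losing": 45, "start": 55, "winning": 70, "won": 80}
bump = {"low": -10, "medium": 0, "high": 10}
rows = []
for rel, state in itertools.product(bump, base):
    for _ in range(4):  # a few replicate ratings per cell (hypothetical)
        rows.append({"relevance": rel, "game_state": state,
                     "importance": base[state] + bump[rel] + random.gauss(0, 5)})
df = pd.DataFrame(rows)

model = ols("importance ~ C(relevance) * C(game_state)", data=df).fit()
anova = sm.stats.anova_lm(model, typ=2)  # F values and p values per factor
resid_ss = anova.loc["Residual", "sum_sq"]
anova["eta_sq_partial"] = anova["sum_sq"] / (anova["sum_sq"] + resid_ss)
print(anova)
```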
2307.13779#26
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
26
5 RESULTS 5.1 MAIN RESULTS The main results are shown on the top of Table 2. GPT-4 (OpenAI, 2023) with CoT prompting achieves a modest end-to-end task success rate of 11.70%, which is significantly lower than the human performance of 78.24%. GPT-3.5 (OpenAI, 2022) with CoT prompting is only able to successfully perform 8.75% of the tasks. The explicit reasoning procedure is somewhat helpful; it brings a 2.34% improvement over the version without it. Further, TEXT-BISON-001 (Anil et al., 2023) underperforms GPT-3.5, with a success rate of 5.05%. These results underline the inherent challenges and complexities of executing tasks that span long horizons, particularly in realistic environments such as WebArena.
2307.13854#26
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
26
T. Mizumoto, H. Ouchi, Y. Isobe, P. Reisert, R. Nagata, S. Sekine, and K. Inui. Analytic score prediction and justification identification in automated short answer scoring. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 316–325. Association for Computational Linguistics, 2019. doi: 10.18653/v1/W19-4433. J. W. Orr and N. Russell. Automatic assessment of the design quality of python programs with personalized feedback. arXiv preprint arXiv:2106.01399, 2021. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019.
2308.02439#26
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
27
# 5 Dataset Construction # 5.1 Prompt and Response Collection KB-based QA For KB-based QA, we evaluate our framework using RoSE (Liu et al., 2022) and FactPrompts. RoSE is a text summarization dataset that provides fine-grained ACUs for each reference summary. FactPrompts is a dataset that comprises real-world prompts sourced from various platforms and datasets, such as Quora and TruthfulQA (Lin et al., 2022), along with corresponding responses generated by ChatGPT. We construct the dataset using 100 reference summaries from RoSE and 50 responses from FactPrompts for our evaluation. Code Generation For code generation, we evaluate our framework using HumanEval (Chen et al., 2021). HumanEval is a programming problem dataset that contains several unit tests for each problem. We use ChatGPT to generate responses based on the processed prompts of HumanEval provided in (Chen et al., 2022) which solely contain the instruction of the prompt without input-output demonstrations.
2307.13528#27
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
27
Daman Arora, Himanshu Gaurav Singh, and Mausam. Have LLMs advanced enough? A challenging problem solving benchmark for Large Language Models, 2023. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan. Constitutional AI: Harmlessness from AI feedback, 2022.
2307.13692#27
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
27
# 2.3 Alternative framing 2.3.1 Prompt In this section, we will examine the validity of the "appraisal equivalence hypothesis" in our assessment of GPT models [7]. The hypothesis, derived from appraisal theory, posits that disparate events will elicit identical emotional responses, provided that they result in the same appraisals. The central prediction of the appraisal equivalence hypothesis, which is validated in earlier studies on human subjects, is that even significant alterations in the surface features of a situation will not change the elicited emotion if the underlying structure (in terms of appraisal variables) remains constant. To verify this in our own context, we will employ Milton Bradley's Battleship board game, which is known to share a similar underlying appraisal structure to the coin-flip game presented in the paper [8]. Our objective is to explore whether two situations that may appear different on the surface but share the same appraisal structure will evoke similar responses from GPT models. We will only report the engineered prompt using the GPT-4 model to be compared to the best result obtained from the original framing. The engineered prompt (with red hints) is presented below: For utility in (‘small’, ‘moderate’, ‘large’):
2307.13779#27
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
27
| CoT | UA Hint | Model | SR | SR_AC | SR_UA |
|---|---|---|---|---|---|
| ✓ | ✓ | TEXT-BISON-001 | 5.05 | 4.00 | 27.78 |
| ✗ | ✓ | GPT-3.5 | 6.41 | 4.90 | 38.89 |
| ✓ | ✓ | GPT-3.5 | 8.75 | 6.44 | 58.33 |
| ✓ | ✓ | GPT-4 | 11.70 | 8.63 | 77.78 |
| ✗ | ✗ | GPT-3.5 | 5.10 | 4.90 | 8.33 |
| ✓ | ✗ | GPT-3.5 | 6.16 | 6.06 | 8.33 |
| ✓ | ✗ | GPT-4 | 14.41 | 13.02 | 44.44 |
| - | ✓ | Human | 78.24 | 77.30 | 100.00 |

Table 2: The end-to-end task success rate (SR %) on WebArena with different prompting strategies. CoT: the model performs step-by-step reasoning before issuing the action. UA hint: ask the model to stop when encountering unachievable questions.

# 5.2 ANALYSIS
2307.13854#27
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
27
are unsupervised multitask learners. 2019. Investigating neural architectures for short answer scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 159–168. Association for Computational Linguistics, 2017. doi: 10.18653/v1/W17-5017. S. Saha, T. I. Dhamecha, S. Marvaniya, P. Foltz, R. Sindhgatta, and B. Sengupta. Joint multi-domain learning for automatic short answer grading. arXiv preprint arXiv:1902.09183, 2019. C. Sung, T. Dhamecha, S. Saha, T. Ma, V. Reddy, and R. Arora. Pre-training BERT on domain resources for short answer grading. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6071–6075. Association for Computational Linguistics, 2019. doi: 10.18653/v1/D19-1628. I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and
2308.02439#27
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
28
Math Problems For math problems, we evaluate our framework using GSM-Hard (Gao et al., 2022b). GSM-Hard is a dataset constructed from GSM8K (Cobbe et al., 2021) by replacing the numbers in the questions of GSM8K with larger numbers. We sampled 100 prompts from GSM-Hard that have a target solution value of positive.7 Then, we generate responses for these prompts using ChatGPT.

7GSM8K involves many application questions, including calculations involving money, measurements of quantities, etc. We found that GSM-Hard examples with negative values often contained illogical situations, such as “negative 5 apples”. A positive target solution value helps prevent ChatGPT from making extra assumptions on top of the description in the problem.

Scientific Literature Review For the scientific literature review, we follow self-instruct (Wang et al., 2023) to create 100 diverse prompts spanning computer science, business, law, medicine, and physics. Each prompt asks for a technical or research-oriented response that includes at least one relevant literature citation. Then, we generate responses for these prompts using ChatGPT.

# 5.2 Claim Collection
2307.13528#28
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
28
Barbri. Barbri Practice Questions: Multistate Testing Practice Questions. Thomson/Bar/Bri, 2007. ISBN 9780314174017. Michael Bommarito II and Daniel Martin Katz. GPT takes the bar exam. arXiv preprint arXiv:2212.14402, 2022. Samuel R. Bowman. The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail, 2021. Volodymyr Brayman and A. G. Kukush. Undergraduate Mathematics Competitions (1995-2016): Taras Shevchenko National University of Kyiv. Springer, 2018.
2307.13692#28
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
28
For utility in (‘small’, ‘moderate’, ‘large’): For game_state in (‘Imagine you have lost the game.’, ‘Imagine you have not sunk any ships and your opponent has already sunk two of your ships. It is your turn.’, ‘Imagine the game is just about to start. It is your turn.’, ‘Imagine you have sunk two of your opponent's ships, and they haven't sunk any of yours. It is your turn.’, ‘Imagine you have won the game.’) "Suppose you are engaged in a game of Battleship. " "You and your opponent take turns calling out locations on the grid board, aiming to sink the opponent's ships. " "The first player to sink all of the opponent's ships wins the game. " If lost or won: "{game_state} Imagine that you were playing for a {utility} sum of money. Keep in mind that the game is over now. " Else: "{game_state} Imagine that you are playing for a {utility} sum of money. "
2307.13779#28
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
28
# 5.2 ANALYSIS Do models know when to stop? In our error analysis of the execution trajectories, we observe a prevalent error pattern of early stopping due to the model’s conclusion of unachievability. For instance, GPT-4 erroneously identifies 54.9% of feasible tasks as impossible. This issue primarily stems from the UA hint in the instruction: while this hint allows models to identify unachievable tasks, it also hinders performance on achievable tasks. To address this, we conduct an ablation study where we remove this hint. We then break down the success rate for both achievable and unachievable tasks. As shown in Table 2, eliminating this instruction led to a performance boost in achievable tasks, enhancing the overall task success rate of GPT-4 to 14.41%. Despite an overall decline in identifying unachievable tasks, GPT-4 retains the capacity to recognize 44.44% of such tasks. It does so by generating reasons of non-achievability, even without explicit instructions. On the other hand, GPT-3.5 rarely exhibits this level of reasoning. Instead, it tends to follow problematic patterns such as hallucinating incorrect answers, repeating invalid actions, or exceeding the step limits. This result suggests that even subtle differences in instruction design can significantly influence the behavior of a model in performing interactive tasks in complex environments.
2307.13854#28
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
29
# 5.2 Claim Collection For responses from FactPrompts and GSM-Hard, we follow the idea of “claim extraction as prompting” described in §4.1. This approach allows us to reuse claim prompts as listed in Appendix A. We use ChatGPT as the model for claim extraction due to its cost efficiency and effectiveness in extracting fine-grained claims. In terms of HumanEval responses, given that the generated response to a HumanEval prompt is already in the form of a code snippet, we consider the “claim” of the response to be identical to the response itself. # 5.3 Claim and Response Annotation KB-based QA & Scientific Literature Review For claim annotation, the authors collectively annotate the extracted claims as either factual or non-factual. For response annotation, if one claim within the response is labeled as non-factual, then the response as a whole is considered non-factual; otherwise, the response is considered factual.
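The response-level rule above reduces to a simple aggregation over claim labels; a minimal sketch (with hypothetical labels) follows.

```python
# Minimal sketch of the response-level labeling rule described above: a response is
# factual only if every extracted claim is factual. The labels here are hypothetical.
def response_is_factual(claim_labels):
    """claim_labels: list of booleans, where True means the claim is factual."""
    return all(claim_labels)

print(response_is_factual([True, True, True]))    # factual response
print(response_is_factual([True, False, True]))   # non-factual response
```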
2307.13528#29
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
29
Volodymyr Brayman and A. G. Kukush. Undergraduate Mathematics Competitions (1995-2016): Taras Shevchenko National University of Kyiv. Springer, 2018. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. CoRR, abs/2005.14165, 2020. URL https://arxiv.org/abs/2005.14165.
2307.13692#29
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
29
Else: "{game_state} Imagine that you are playing for a {utility} sum of money. " "Answer the following questions on a scale of 1 to 100, where 1 means 'not at all' and 100 means 'a lot'. " "1. Rate the importance of winning to you. " "2. Rate your perceived chances of winning. " "3. Rate your level of control over the outcome. " "4. Rate your level of hope. " "5. Rate your level of fear. " "6. Rate your level of joy. " "7. Rate your level of sadness. " "8. Rate your level of anger. " "Please do not respond anything else other than the answers to the 8 questions above. " "Please put the answer in the following JSON format and make all data types to be string and use all lowercase. It is very important. " '{"1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""} ' "Please avoid evading the questions by providing a non-committal or generic response, such as 0 or 50 in this case." 10 2.3.2 Results
2307.13779#29
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
29
Can a model maintain consistent performance across similar tasks? Tasks that originate from the same template usually follow similar reasoning and planning processes, even though their observations and executions will differ. We plot a histogram of per-template success rates for our models in Table 3. Of the 61 templates, GPT-4 manages to achieve a 100% task success rate on only four templates, while GPT-3.5 fails to achieve full task completion for any of the templates. In many cases, the models are only able to complete one task variation with a template. These observations indicate that even when tasks are derived from the same template, they can present
[Figure: histograms of per-template success rates (%) for gpt-3.5-direct, gpt-3.5-cot, and gpt-4-cot; y-axis: number of templates.] Table 3: Distribution of success rates on templates with ≥ 1 successful executions on GPT models (no UA hint).
2307.13854#29
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
29
# Instructor Question: Explain what a neuron is, detailing how they transmit information and what unique features they have.
Criteria: Must include the terms “synapse” and “action potential.”
Student-Generated Answer: Neurons are cells that transmit information to other nerve, muscle, or gland cells. They use synapses.
AI-Generated Feedback: — You correctly state that neurons are cells that transmit information to other nerve, muscle, or gland cells. — You also mention the use of synapses, which is correct as synapses are the connections between neurons where information is transmitted. — However, your response is quite brief and lacks important details and explanations. Here are some points that you may have missed or misunderstood: — You did not provide any information about how neurons transmit information. It would be helpful to explain that neurons transmit information through electrical impulses called action potentials. — You did not mention the role of neurotransmitters in transmitting information between neurons. It is important to explain that neurotransmitters are chemical messengers released at the synapse to transmit the signal from one neuron to the next. — Your response could benefit from more specific and
2308.02439#29
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
30
Code Generation We consider the claim label to be identical to the response label since the “claim” of the response is the same as the response itself. For response annotation, we annotate ChatGPT’s responses using the execution code provided in (Chen et al., 2022) against the HumanEval test cases. This allows us to distinguish between factual (those passing all tests) responses and non-factual responses. Math Problems For claim annotation, the authors collectively annotate the extracted claims as either factual or non-factual. For response annotation, we utilize the target value provided in GSM-Hard (Gao et al., 2022b) to annotate the generated responses.

Table 3: Detailed statistics of datasets used in this work. Note that (p, n) denotes p = count of positive responses or claims, and n = count of negative responses or claims. “Sci.Lit” represents scientific literature review.

| Task | Dataset | Responses | Claims |
|---|---|---|---|
| KB-QA | RoSE | 100 | 527 |
| KB-QA | FactPrompts | 50 (23:27) | 233 (177:56) |
| Code | HumanEval | 164 (109:55) | 164 (109:55) |
| Math | GSM-Hard | 100 (47:53) | 284 (246:38) |
| Sci.Lit | FactPrompts | 100 (10:90) | 186 (33:153) |
2307.13528#30
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
30
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023. Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken neural scaling laws, 2023. 9 Candice McCloskey Campbell, Shaun Murphree, Jennifer M. Warner, Amy B. Wachholz, Kathy A. Zahler, and George J. Hademenos. McGraw-Hill Education 3 MCAT Practice Tests, Third Edition. McGraw-Hill Education, Jan 2017. ISBN 1259859622. Bryan Caplan. GPT retakes my midterm and gets an A, 2023. URL https://betonit.substack.com/ p/gpt-retakes-my-midterm-and-gets-an. Cheng-Han Chiang and Hung-yi Lee. Can Large Language Models be an alternative to human evaluations? arXiv e-prints, art. arXiv:2305.01937, may 2023. doi: 10.48550/arXiv.2305.01937.
2307.13692#30
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
30
"Please avoid evading the questions by providing a non-committal or generic response, such as 0 or 50 in this case." 10 2.3.2 Results We repeated the statistical analysis on the impact of game state and goal-relevance for each emotion sepa- rately using a 3 (low, med, high relevance) x 5 (lost, losing, start, winning, won) ANOVA using the data generated by the adjusted queries. Figure SM.7 and Table SM.7 summarize the results.
2307.13779#30
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
30
Table 3: Distribution of success rates on templates with ≥ 1 successful executions on GPT models (no UA hint).

| Benchmark | Dynamic Interaction? | Realistic Environment? | Diverse Human Tasks? | Functional Correctness? |
|---|---|---|---|---|
| Mind2Web (Deng et al., 2023) | ✗ | ✓ | ✓ | ✗ |
| Form/QAWoB (Shi et al., 2017) | ✗ | ✓ | ✓ | ✗ |
| MiniWoB++ (Liu et al., 2018) | ✓ | ✗ | ✗ | ✓ |
| Webshop (Yao et al., 2022a) | ✓ | ✗ | ✗ | ✓ |
| ALFRED (Shridhar et al., 2020) | ✓ | ✗ | ✗ | ✓ |
| VirtualHome (Puig et al., 2018) | ✗ | ✗ | ✓ | ✗ |
| AndroidEnv (Toyama et al., 2021) | ✓ | ✓ | ✗ | ✗ |
| WebArena | ✓ | ✓ | ✓ | ✓ |

Table 4: The comparison between our benchmark and existing benchmarks on grounding natural language instructions to concrete executions. Our benchmark is implemented in our fully interactable highly-realistic environment. It features diverse tasks humans may encounter in their daily routines. We design evaluation metrics to assess the functional correctness of task executions.
2307.13854#30
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
30
explain that neurotransmitters are chemical messengers released at the synapse to transmit the signal from one neuron to the next. — Your response could benefit from more specific and detailed information about the unique features of neurons, such as the three main parts of a neuron (cell body, dendrites, and axon) and how the action potential is propagated along the axon. Overall, your response is a good start, but it would be more complete and accurate if you include the missing details mentioned above. Keep up the effort!
Criteria (continued from the neuron question): “action potential.” Must mention the role of neurotransmitters.
Question: What are the components and structure of a molecule of DNA?
Criteria: Mention base pairs, sugar, and phosphate. Describe that DNA is a double helix. Note that base pairs pair up in a specific way using hydrogen bonds (AT and GC).
Student-Generated Answer: DNA is a complex molecule and it is shaped like a double helix ladder, where the rungs are base pairs ATGC and the scaffold is sugars and phosphates. The base pairs bind (A with G) and (C with T) using hydrogen bonds, which can be separated when the DNA is being read or duplicated.
AI-Generated Feedback: Correct points: — You correctly mentioned that
2308.02439#30
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
31
# 6 Experiments We evaluate FACTOOL against two baselines that use LLMs to check their own inputs: Self-Check with 3-shot CoT and zero-shot CoT, which have been shown to be effective on various tasks including dialogue response, math reasoning, and code generation (Madaan et al., 2023; Chen et al., 2023). Both of these baselines aim to test the ability of LLMs to identify their own errors without the use of any external tool. In practice, we prompt ChatGPT (gpt-3.5-turbo-0301) and GPT-4 (gpt-4-0314)^8 to recognize, explain, and attempt to rectify their own errors. Following this reasoning process, the models make final judgments on the factuality of the given claim. The key difference between Self-Check (zero-shot CoT) and Self-Check (3-shot CoT) is that Self-Check (3-shot CoT) provides three demonstrations to the models, while Self-Check (zero-shot CoT) does not provide any demonstrations. # 6.1 Exp-I: Claim Extraction Evaluation
2307.13528#31
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13779
31
[Figure SM.7: Intensity derivation results (corresponding to Fig. 8 in the paper). Panels for Hope, Fear, Joy, and Sadness plot intensity against probability for low, medium, and high utility.]
2307.13779#31
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
31
distinct challenges. For instance, while “Fork metaseq” can be a straightforward task, “Fork all repos from Facebook” derived from the same template requires more repetitive operations, hence increasing its complexity. Therefore, WebArena provides a testbed to evaluate more sophisticated methods, in particular those that incorporate memory components, enabling the reuse of successful strategies from past experiments (Zhou et al., 2022a; Wang et al., 2023). More error analysis with examples can be found in Appendix A.8. # 6 RELATED WORK
2307.13854#31
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
31
but it would be more complete and accurate if you include the missing details mentioned above. Keep up the effort!
AI-Generated Feedback (DNA question): Correct points: — You correctly mentioned that DNA is shaped like a double helix ladder. — You accurately stated that the base pairs in DNA are ATGC. — You correctly noted that the base pairs bind using hydrogen bonds. Points to improve: — You mentioned that the base pairs bind as A with G and C with T, but it's important to clarify that adenine pairs with thymine (A-T) and guanine pairs with cytosine (G-C). Overall, you have a solid understanding of the components and structure of DNA. Just make sure to clarify the pairing of base pairs and the temporary breaking of hydrogen bonds during DNA processes. Good work!
Question: How can large language models introduce biases into student evaluation?
Student-Generated Answer: LLMs have bias because their training data can have toxic, biased, or inaccurate data in it. When evaluating students, LLMs could also penalize students that know information that is more recent or otherwise outside the LLM's training set, which may appear to be inaccurate to the AI model. LLMs are also not designed to keep track of accurate information; they are
2308.02439#31
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
32
# 6.1 Exp-I: Claim Extraction Evaluation We evaluate the claim extraction module of FACTOOL on RoSE (Liu et al., 2022). We treat the reference summary as the generated text x, and the reference ACUs as the golden-extracted claims. We measure the similarity between the machine-extracted (GPT-4, ChatGPT, and Flan-T5 XXL) claims {c_i^c}_{i=1···n_c} and golden-extracted claims {c_i^g}_{i=1···n_g} using 4 metrics: ROUGE-1, ROUGE-2, ROUGE-L (Lin, 2004), and BERTScore. In Tab. 4, we report the average of the highest similarity between each ChatGPT-extracted claim and the corresponding golden-extracted claim in the same sample.
[Footnote 8] We anticipate that the recently released models, gpt-3.5-turbo-0613 and gpt-4-0613, will lower the inference costs for FACTOOL. This expectation arises from their improved ability to produce structured responses, such as those in JSON format. While conducting our experiments on gpt-3.5-turbo-0301 and gpt-4-0314, we often ran into problems where the responses were not valid JSON, requiring us to rerun any samples with invalid response formats. The source code of FACTOOL will be using the latest versions of ChatGPT and GPT-4.
2307.13528#32
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13779
32
Impact of game state and goal-relevance for each emotion (Table SM.7, Battleship framing):

| Emotion | Goal-relevance | Game State | Interaction Effect |
|---|---|---|---|
| Hope | F(2, 133) = 3.541, p = 0.0317*, η² = 0.05 | F(4, 133) = 304.804, p < .001***, η² = 0.90 | F(8, 133) = 2.436, p = 0.0172*, η² = 0.13 |
| Fear | F(2, 133) = 17.49, p < .001***, η² = 0.21 | F(4, 133) = 203.59, p < .001***, η² = 0.86 | F(8, 133) = 14.13, p < .001***, η² = 0.46 |
| Joy | F(2, 133) = 4.093, p = 0.0188*, η² = 0.06 | F(4, 133) = 191.473, p < .001***, η² = 0.85 | F(8, 133) = |
| Sadness | F(2, 133) = 0.672, p = 0.513, η² = 0.01 | F(4, 133) = 182.780, p < .001***, η² = 0.85 | |
2307.13779#32
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
32
# 6 RELATED WORK Benchmarks for Controlling Agents through Natural Language. Controlling agents via natural language in the digital world has been studied in the literature (Branavan et al., 2009; Shi et al., 2017; Liu et al., 2018; Toyama et al., 2021; Deng et al., 2023; Li et al., 2020; Xu et al., 2021). However, the balance between functionality, authenticity, and support for environmental dynamics remains a challenge. Existing benchmarks often compromise these aspects, as shown in Table 4. Some works rely on static states, limiting agents’ explorations and functional correctness evaluation (Shi et al., 2017; Deng et al., 2023), while others simplify real-world complexities, restricting task variety (Yao et al., 2022a; Liu et al., 2018). While AndroidEnv (Toyama et al., 2021) replicates an Android setup, it does not evaluate complex and realistic tasks. This gap is also seen in synthetic environments (Kolve et al., 2017; Shridhar et al., 2020; Puig et al., 2018) and extends to gaming environments (Fan et al., 2022; Küttler et al., 2020), where the environment mechanisms often diverge from human objectives.
2307.13854#32
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
32
recent or otherwise outside the LLM's training set, which may appear to be inaccurate to the AI model. LLMs are also not designed to keep track of accurate information; they are autoregressive language models, and so they do not have a legitimate hold on fact and caution should be used when depending on an AI model for subtle communication.
Criteria: (None provided)
AI-Generated Feedback: Great job in identifying some of the key factors that contribute to biases in large language models (LLMs) in student evaluation. It would be helpful to provide specific examples or evidence of how LLMs can have biased or inaccurate data in their training set. This would strengthen your argument and provide more clarity. — Your statements align with the current understanding of LLMs and their limitations. Overall, your response demonstrates a good understanding of the topic. To further enhance your answer, consider providing more specific examples and expanding on the potential implications of relying on LLMs for student evaluation. Great job!
Question: How can large language models introduce biases into student evaluation?
Criteria: (None provided)
Student-Generated Answer: LLMs are biased because they're made by large companies.
AI-Generated Feedback: Your response to the prompt raises an important point, but there are a few areas where your answer could be
2308.02439#32
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
33
| Model | Metric | Precision | Recall | F1-score |
|---|---|---|---|---|
| GPT-4 | ROUGE-1 | 0.7394 | 0.8758 | 0.7860 |
| GPT-4 | ROUGE-2 | 0.6304 | 0.7771 | 0.6772 |
| GPT-4 | ROUGE-L | 0.7175 | 0.8625 | 0.7667 |
| GPT-4 | BERTScore | 0.6632 | 0.7865 | 0.7175 |
| ChatGPT | ROUGE-1 | 0.7770 | 0.8285 | 0.7836 |
| ChatGPT | ROUGE-2 | 0.6520 | 0.7115 | 0.6610 |
| ChatGPT | ROUGE-L | 0.7557 | 0.8148 | 0.7655 |
| ChatGPT | BERTScore | 0.6958 | 0.7521 | 0.7174 |
| FLAN-T5-XXL | ROUGE-1 | 0.6531 | 0.8928 | 0.7326 |
| FLAN-T5-XXL | ROUGE-2 | 0.5609 | 0.8157 | 0.6413 |
| FLAN-T5-XXL | ROUGE-L | 0.6428 | 0.8885 | 0.7237 |
| FLAN-T5-XXL | BERTScore | 0.4314 | 0.6661 | 0.5408 |

Table 4: The average similarity between the extracted claims from GPT-4, ChatGPT, and Flan-T5 XXL and the golden ACUs on RoSE. We report the average of the highest similarity between each ChatGPT-extracted claim and the corresponding golden-extracted claim in the same sample (i.e., the mean over claims i of max_j sim(c_i^c, c_j^g)).
2307.13528#33
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
33
Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with Pathways, 2022. URL https://arxiv.org/abs/2204.02311.
2307.13692#33
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
33
Interactive Decision-Making Agents. Nakano et al. (2021) introduce WebGPT, which searches the web and reads the search results to answer questions. Gur et al. (2023) propose a web agent that decomposes tasks into more manageable sub-tasks and synthesizes Javascript code for the task executions. Adding a multi-modal dimension, Lee et al. (2023) and Shaw et al. (2023) develop agents that predict actions based on screenshots of web pages rather than relying on the text-based DOM trees. Performing tasks in interactive environments requires the agents to exhibit several capabilities including hierarchical planning, state tracking, and error recovery. Existing works (Huang et al., 2022; Madaan et al., 2022; Li et al., 2023) observe that LLMs can break a task into more manageable sub-tasks (Zhou et al., 2022b). This process can be further refined by representing task executions as programs, a technique that aids sub-task management and skill reuse (Zhou et al., 2022a; Liang et al., 2023; Wang et al., 2023; Gao et al., 2023). Meanwhile, search and backtracking
2307.13854#33
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
33
caution should be used when depending on an AI model for subtle communication.
AI-Generated Feedback: Your response to the prompt raises an important point, but there are a few areas where your answer could be elaborated. Large language models (LLMs) are indeed created by large companies, but it’s important to note that the fact that LLMs are made by large companies does not directly imply bias. Bias can arise from various factors related to the training data and model design. Consider mentioning factors such as the source of training data, language patterns, prejudiced examples, feedback loops, lack of contextual understanding, and how evaluation metrics may steer LLM responses. Overall, your response is a good starting point, but it would benefit from further elaboration and a deeper understanding of the specific factors that contribute to biases in student evaluation by large language models. Keep up the effort!
2308.02439#33
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
34
We report the average of the highest similarity between each ChatGPT-extracted claim and the corresponding golden-extracted claim in the same sample (i.e., the mean over claims i of max_j sim(c_i^c, c_j^g)). Results We demonstrate in Tab. 4 that the claims extracted by GPT-4, ChatGPT, and Flan-T5 closely match the ACUs annotated by humans, as evaluated by ROUGE and BERTScore metrics. Note that in Exp-II, we choose ChatGPT as the claim extractor for two reasons: (1) The context length of Flan-T5 is too short (512 tokens) to effectively extract claims from lengthy responses in our dataset. (2) ChatGPT is more cost-efficient compared to GPT-4, while maintaining similar effectiveness in claim extraction. # 6.2 Exp-II: Framework Evaluation We evaluate FACTOOL and the two Self-Check baselines on the dataset constructed from each scenario. Depending on the model used for query generation and agreement verification, we have two FACTOOL baselines: FACTOOL powered by ChatGPT and FACTOOL powered by GPT-4. We report the accuracy, recall, precision, and F1-score at both the claim and response levels. 6.2.1 Result Tab. 5 shows the claim-level and response-level performance of FACTOOL and the self-check baselines. We obtain the following observations.
2307.13528#34
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
34
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, jun 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. GPTScore: Evaluate as you desire. arXiv e-prints, art. arXiv:2302.04166, feb 2023. doi: 10.48550/arXiv.2302.04166.
2307.13692#34
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
34
We also repeated the analysis of emotional distancing/engagement for the alternative framing (Battleship). [Figure SM.8: Consequence derivation results (corresponding to Fig. 9 in the paper); panels for the Human, Coin-flip, and Battleship framings plot change in importance across Low, Medium, and High relevance for the Lost, Losing, Winning, and Won game states.] Table SM.8 (Table 4 in the paper), impact of game state and relevance on importance of winning (Battleship framing): Utility (Goal-relevance) F = 81.54, p < .001***, partial η² = 0.56; Game State F = 159.87, p < .001***, partial η² = 0.83; Interaction Effect F = 24.37, p < .001***, partial η² = 0.60. # 2.4 Prompt structures In this section, we aim to investigate how the sequencing and structuring of prompts influence the responses generated by GPT-4. We hypothesize that changes in the way prompts are organized and delivered can significantly affect the output.
2307.13779#34
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
34
(Zhou et al., 2022a; Liang et al., 2023; Wang et al., 2023; Gao et al., 2023). Meanwhile, search and backtracking methods introduce a more structured approach to planning while also allowing for decision reconsideration (Yao et al., 2023; Long, 2023). Existing works also incorporate failure recovery, self-correction (Shinn et al., 2023; Kim et al., 2023), observation summarization (Sridhar et al., 2023) to improve execution robustness. The complexity of WebArena presents a unique challenge and opportunity for further testing and improvement of these methods.
2307.13854#34
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
35
6.2.1 Result Tab. 5 shows the claim-level and response-level performance of FACTOOL and the self-check baselines. We obtain the following observations. FACTOOL powered by GPT-4 outperforms all other baselines across all scenarios From Tab. 5, we observe that FACTOOL powered by GPT-4 outperforms all other baselines across all scenarios. FACTOOL powered by GPT-4 achieves an 89.09 claim-level F1 / 71.79 response-level F1 on KB-based QA, a 92.11 claim-level F1 / 92.11 response-level F1 on code generation (remember
2307.13528#35
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
35
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Chain-of-thought hub: A continuous effort to measure large language models’ reasoning performance, 2023. R˘azvan Gelca and Titu Andreescu. Putnam and beyond. Springer, 2017. Gaël Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. Large language models are not abstract reasoners, 2023. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did aristotle use a laptop? a question answering benchmark with implicit reasoning strategies, 2021. Zoubin Ghahramani. Introducing PaLM 2, 2023. URL https://blog.google/technology/ai/ google-palm-2-ai-large-language-model. Department of Mathematics Harvard University. Qualifying examination for fall 2021, Aug 2021. URL https://www.math.harvard.edu/media/quals-F21_with_solutions.pdf. Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2020. URL https://arxiv.org/abs/2009. 03300.
2307.13692#35
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
35
Our experiment will unfold under three distinct conditions. In the 'Normal' or combined condition, GPT-4 is given the questions altogether. In the 'Random' condition, GPT-4 is given the same series of prompts, but their order is randomized. Finally, in the 'Sequential' condition, these prompts are presented individually, one after the other. Figure SM.9, Figure SM.10, Table SM.9, and Table SM.10 summarize the results for the Random vs. Normal and Sequential vs. Normal comparisons, respectively. MANOVA showed that for both the Intercept and Condition, F values were notably high (2528.7 and 3.67, respectively), reaching statistical significance (p < 0.001). Similarly, for the second comparison, the Intercept and Condition F values were notably high (2704.7 and 22.6, respectively), reaching statistical significance (p < 0.001). [Figure SM.9: Consequence derivation results (corresponding to Fig. 9 in the paper); intensity ratings under the combined and random prompt orderings.]
2307.13779#35
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
35
# 7 CONCLUSION We present WebArena, a highly-realistic, standalone, and reproducible web environment designed for the development and testing of autonomous agents. WebArena includes fully functional web applications and genuine data from four major categories, providing a realistic platform for agent interaction. It further supports a wide range of tools and external knowledge bases, fostering a focus on human-like problem-solving. Additionally, we curate a comprehensive benchmark consisting of 812 examples that focus on translating high-level natural language intents into specific web interactions. We also offer metrics to programmatically ascertain whether tasks have been completed according to the desired objectives. Our experiments show that even GPT-4 only achieves a limited end-to-end task success rate of 14.41%, significantly lagging behind the human performance of 78.24%. These findings underscore the need for future research to focus on enhancing the robustness and efficacy of autonomous agents within WebArena environment. # ACKNOWLEDGEMENT
2307.13854#35
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
35
T. B. Hashimoto. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. https://crfm.stanford.edu/2023/03/13/alpaca.html, 3(6):7, 2023. H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. L. C. Ureel II and C. Wallace. Automated critique of early programming antipatterns. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education, SIGCSE ’19, pages 738–744. Association for Computing Machinery, 2019. ISBN 978-1-4503-5890-3. doi: 10.1145/3287324.3287463.
2308.02439#35
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
36
Claim-Level Response-Level Tasks LLMs Methods Acc. R P F1 Acc. R P F1 ChatGPT Self-Check (0) Self-Check (3) FACTOOL 75.54 69.53 74.25 90.40 81.36 73.45 80.00 79.12 90.91 84.88 80.23 81.25 54.00 54.00 64.00 60.87 47.83 43.48 50.00 50.00 66.67 54.90 48.89 52.63 KB-QA GPT-4 Self-Check (0) Self-Check (3) FACTOOL 77.25 79.83 84.12 84.75 85.88 85.31 85.23 87.36 93.21 84.99 86.61 89.09 54.00 64.00 78.00 95.65 52.17 60.87 50.00 63.16 87.50 65.67 57.14 71.79 ChatGPT Self-Check (0) Self-Check (3) FACTOOL 68.29 68.90 78.05 99.10 100.00 89.19 68.33 68.52 80.49 80.88 81.32 84.62 68.29 68.90 78.05 99.10 100.00 89.19 68.33 68.52 80.49 80.88 81.32
2307.13528#36
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
36
Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. CoRR, abs/2103.03874, 2021. URL https://arxiv.org/abs/2103.03874. Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models, 2022. URL https://arxiv.org/abs/2203.15556. 10 Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020.
2307.13692#36
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
36
[Figure SM.9: Consequence derivation results (corresponding to Fig. 9 in the paper); intensity under the combined and random prompt orderings.] [Figure SM.10: Consequence derivation results (corresponding to Fig. 9 in the paper); intensity under the combined and sequential prompt orderings.] # Table SM.9 ANOVA results for different appraisal variables – Normal × Random. Dependent variable (F value, p, p corrected): Variable Relevance (4.043, 0.045, 0.315); Variable Congruence (0.163, 0.686, 1); Self-Accountability (0.027, 0.869, 1); Other Accountability (1.067, 0.302, 1); Future Expectancy (0.011, 0.916, 1); Problem Focused Coping (3.040, 0.082, 0.574); Accommodative Focused Coping (3.610, 0.058, 0.407).
2307.13779#36
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
36
# ACKNOWLEDGEMENT We would like to thank Emmy Liu, Zhiruo Wang, Zhitong Guo for examining our annotations, Shunyu Yao for providing the raw Amazon product data in Webshop, Pengfei Liu, Zaid Sheikh and Aman Madaan for the helpful discussions. We are also grateful to the Center for AI Safety for providing computational resources. This material is partly based on research sponsored in part by the Air Force Research Laboratory under agreement number FA8750-19-2-0200. The U.S. Government is authorized to reproduce and distribute reprints for Governmental purposes notwithstanding any copyright notation thereon. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the Air Force Research Laboratory or the U.S. Government. This project was also partially supported by a gift from AWS AI. # REFERENCES
2307.13854#36
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2308.02439
36
T. van Viegen, A. Akrami, K. Bonnen, E. DeWitt, A. Hyafil, H. Ledmyr, G. W. Lindsay, P. Mineault, J. D. Murray, X. Pitkow, et al. Neuromatch academy: Teaching computational neuroscience with global accessibility. Trends in cognitive sciences, 25(7):535–538, 2021. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Transformers: State-of-the-Art Natural Language Processing. pages 38–45. Association for Computational Linguistics, Oct. 2020. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6. S.-Y. Yoon. Short answer grading using one-shot prompting and text similarity scoring model, 2023. X. Zhu, H. Wu, and L. Zhang. Automatic short-answer grading via bert-based deep neural networks. IEEE Transactions on Learning Technologies, 15(3):364–375, 2022.
2308.02439#36
A large language model-assisted education tool to provide feedback on open-ended responses
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
http://arxiv.org/pdf/2308.02439
Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording
cs.CY, cs.AI
null
null
cs.CY
20230725
20230725
[ { "id": "2106.01399" }, { "id": "2307.09288" }, { "id": "1902.09183" } ]
2307.13528
37
81.32 84.62 68.29 68.90 78.05 99.10 100.00 89.19 68.33 68.52 80.49 80.88 81.32 84.62 Code GPT-4 Self-Check (0) Self-Check (3) FACTOOL 75.31 77.44 89.02 95.50 96.40 94.59 75.18 76.43 89.74 84.13 85.26 92.11 75.31 77.44 89.02 95.50 96.40 94.59 75.18 76.43 89.74 84.13 85.26 92.11 ChatGPT Self-Check (0) Self-Check (3) FACTOOL 84.15 87.32 97.54 90.24 94.31 97.56 91.36 91.34 99.59 90.80 92.80 98.56 57.00 61.00 78.00 74.47 89.36 93.62 53.03 55.26 69.84 61.95 68.29 80.00 Math GPT-4 Self-Check (0) Self-Check (3) FACTOOL 83.10 92.61 98.24 86.99 96.75 97.97 93.04 94.82 100.00 89.92 95.77 98.97 49.00 65.00 78.00 85.11 89.36
2307.13528#37
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
37
Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. GPT-4 passes the bar exam. Available at SSRN 4389233, 2023. Tom Kocmi and Christian Federmann. Large language models are state-of-the-art evaluators of translation quality. arXiv e-prints, art. arXiv:2302.14520, feb 2023. doi: 10.48550/arXiv.2302.14520. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners, 2023. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi. MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1157, San Diego, California, jun 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1136. URL https://aclanthology.org/N16-1136.
2307.13692#37
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
37
Table SM.10 ANOVA results for different appraisal variables – Normal × Sequential. Dependent variable (F value, p, p corrected): Variable Relevance (0.027, 0.868, 1); Variable Congruence (0.239, 0.625, 1); Self-Accountability (7.009, 0.008, 0.059); Other Accountability (50.125, ***, ***); Future Expectancy (1.529, 0.217, 1); Problem Focused Coping (17.742, ***, ***); Accommodative Focused Coping (26.635, ***, ***). Significance codes: ‘***’ for 0.001 and ‘**’ for 0.01. # 2.5 Additional data and graphs The graphs below demonstrate emotion intensities based on the game state corresponding to the second study presented in the paper. In addition to the four emotional responses discussed in the paper (i.e., Hope, Joy, Fear, Sadness), we have queried Anger, Importance of the goal, and Control over winning for different states in the game (Lost, Losing, Start, Winning, Won) and three utility levels (Low, Medium, and High).
2307.13779#37
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
37
# REFERENCES Peter Anderson, Qi Wu, Damien Teney, Jake Bruce, Mark Johnson, Niko Sünderhauf, Ian D. Reid, Stephen Gould, and Anton van den Hengel. Vision-and-language navigation: Interpret- ing visually-grounded navigation instructions in real environments. In 2018 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2018, Salt Lake City, UT, USA, June 18- 22, 2018, pp. 3674–3683. IEEE Computer Society, 2018. doi: 10.1109/CVPR.2018.00387. URL http://openaccess.thecvf.com/content_cvpr_2018/html/Anderson_ Vision-and-Language_Navigation_Interpreting_CVPR_2018_paper.html.
2307.13854#37
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
38
96.75 97.97 93.04 94.82 100.00 89.92 95.77 98.97 49.00 65.00 78.00 85.11 89.36 95.74 47.62 58.33 69.23 61.07 70.59 80.36 ChatGPT Self-Check (0) Self-Check (3) FACTOOL 28.69 24.19 97.31 96.00 96.97 84.85 21.82 18.60 100.00 35.56 31.22 91.80 18.00 22.00 99.00 100.00 90.00 90.00 10.87 10.47 100.00 19.61 18.75 94.74 GPT-4 Self-Check (0) Self-Check (3) FACTOOL 35.75 44.75 98.39 84.85 87.88 90.91 20.29 23.20 100.00 32.75 36.71 95.24 19.00 49.00 99.00 100.00 70.00 90.00 10.99 12.73 100.00 19.80 21.54 94.74
2307.13528#38
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
38
Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2 (2):e0000198, 2023. Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Xiangji Huang. A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets, 2023. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022. URL https://arxiv.org/abs/2206.14858.
2307.13692#38
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
38
[Figure SM.11: Emotional responses (Hope, Joy, Fear, Anger, Sadness) based on the game state and assigned utility (low, medium, high).] [Figure SM.12: GPT’s perceived control over winning and importance of winning based on the game state and assigned utility.] To manipulate the relevance of winning, the prompt was varied to imagine the game was being played for different levels of utility. We had initially experimented with the same scenarios with actual Dollar amounts ($1, $100, $100,000, $1,000,000), but this seemed to produce almost random responses. The resulting graphs corresponding to the ones presented earlier are provided next.
2307.13779#38
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13528
39
Table 5: Experimental results of FACTOOL powered by ChatGPT and FACTOOL powered by GPT-4 on KB-based QA, Code Generation, Math Problems, and Scientific Literature Review. that claim-level factuality is considered equivalent to response-level factuality in our experiment for code generation), a 98.97 claim-level F1 / 80.36 response-level F1 on math problems, and a 95.24 claim-level F1 / 94.74 response-level F1 on scientific literature review. Each of these figures is the highest for their respective tasks. On FACTOOL powered by GPT-4 vs. Self-Check (3) powered by GPT-4, we observe: 95.24 vs. 36.71 claim-level F1 and 94.74 vs. 21.54 response-level F1. Here, Google Scholar is shown to be highly robust in performing its specified task of finding citations when compared to the LLM itself.
2307.13528#39
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13779
39
[Figure SM.13: Emotional responses (Hope, Joy, Fear, Anger, Sadness) based on the game state and assigned utility (Dollar amounts).] [Figure SM.14: GPT-3.5’s perceived control over winning and importance of winning based on the game state and assigned utility (Dollar amounts).] # 2.6 Affect derivation
2307.13779#39
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13528
40
FACTOOL powered by GPT-4 outperforms all self-check baselines across all scenarios From Tab. 5, we show that FACTOOL with GPT-4 outperforms all self-check baselines across all scenarios. On FACTOOL powered by GPT-4 vs. Self-Check (3) powered by GPT-4, we observe: 71.79 vs. 57.14 response-level F1 on KB-based QA, 92.11 vs. 85.26 response-level F1 on code generation, 80.36 vs. 70.59 response-level F1 on math problems, and 94.74 vs. 21.54 response-level F1 on scientific literature review.
2307.13528#40
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
40
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A. Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models, 2022.
2307.13692#40
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
40
# 2.6 Affect derivation In the second study, we examine whether GPT-3.5 reports a theoretically plausible relationship between appraisal variables and emotions. Appraisal theories assume that emotions arise from specific patterns of appraisals, so we examine the pattern underlying GPT-3.5 responses. To do this, we perform multiple linear regression with and without backward elimination to predict GPT-predicted emotions from the reported appraisals (a hypothetical code sketch of this kind of analysis follows this entry). Results are shown in Tables SM.11 and SM.12.
2307.13779#40
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
40
Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. Palm 2 technical report, 2023.
2307.13854#40
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
41
FACTOOL powered by GPT-4 outperforms FACTOOL powered by ChatGPT. FACTOOL powered by GPT-4 outperforms FACTOOL powered by ChatGPT across all scenarios. This trend is especially significant in KB-QA, where query generation and agreement verification are harder for ChatGPT but relatively easier for GPT-4 (89.09 vs. 81.25 claim-level F1 and 71.79 vs. 52.63 response-level F1; see the metric sketch after this entry). In scenarios where query generation and agreement verification are relatively easy for both ChatGPT and GPT-4, the performance is similarly good.
2307.13528#41
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
41
Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe. Let’s verify step by step, 2023. Swee Cheng Lim, Choy Heng Lai, and Leong Chuan Kwek. Problems and solutions on optics. World Scientific, 2019. Yung-kuo Lim. Problems and solutions on thermodynamics and statistical mechanics. World Scientific, 1996. Yung-kuo Lim. Problems and solutions in quantum mechanics: Major American universities Ph.D. qualifying questions and solutions. World Scientific, 1998. Yung-kuo Lim. Problems and solutions on electromagnetism. World Scientific, 2007. Yung-kuo Lim and Yuan-qi Qiang. Problems and solutions on mechanics. World Scientific, 2001.
2307.13692#41
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13779
41
Table SM.11: Affect derivation using multiple linear regression

| Emotion | R-squared | Independent variable | Standardized Coefficients | Std. Err | t-value |
|---|---|---|---|---|---|
| Hope | 0.581 | const | 42.0619 | 5.484 | 7.670 |
| Hope | 0.581 | Utility | -0.1527 | 0.446 | -0.342 |
| Hope | 0.581 | Importance | -0.0817 | 0.057 | -1.434 |
| Hope | 0.581 | Likelihood | 0.5616 | 0.024 | 23.887 |
| Hope | 0.581 | Control | 0.1092 | 0.026 | 4.189 |
| Fear | 0.561 | const | 71.7522 | 5.979 | 12.002 |
| Fear | 0.561 | Utility | -2.6626 | 0.486 | -5.474 |
| Fear | 0.561 | Importance | 0.0072 | 0.062 | 0.116 |
| Fear | 0.561 | Likelihood | -0.5383 | 0.026 | -21.000 |
| Fear | 0.561 | Control | -0.1623 | 0.028 | -5.713 |
| Joy | 0.712 | const | -45.9581 | 6.947 | -6.616 |
| Joy | 0.712 | Utility | -0.0826 | 0.565 | -0.146 |
| Joy | 0.712 | Importance | 0.4096 | 0.072 | 5.674 |
| Joy | 0.712 | Likelihood | 0.9644 | 0.030 | 32.382 |
| Joy | 0.712 | Control | 0.1084 | 0.033 | 3.285 |
| Sadness | 0.512 | const | 26.4085 | 5.719 | 4.618 |
| Sadness | 0.512 | Utility | -1.6265 | 0.465 | -3.496 |
| Sadness | 0.512 | Importance | 0.3342 | 0.059 | (truncated in source chunk) |
| Sadness | 0.512 | Likelihood | -0.5521 | 0.025 | (truncated in source chunk) |
| Sadness | 0.512 | Control | -0.0519 | 0.027 | (truncated in source chunk) |
2307.13779#41
Is GPT a Computational Model of Emotion? Detailed Analysis
This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective.
http://arxiv.org/pdf/2307.13779
Ala N. Tak, Jonathan Gratch
cs.CL, cs.AI, cs.CY, cs.HC
null
null
cs.CL
20230725
20230725
[ { "id": "2302.08399" } ]
2307.13854
41
Yonatan Bisk, Jan Buys, Karl Pichotta, and Yejin Choi. Benchmarking hierarchical script knowledge. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4077–4085, Minneapolis, Minnesota, 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1412. URL https://aclanthology.org/N19-1412. S.R.K. Branavan, Harr Chen, Luke Zettlemoyer, and Regina Barzilay. Reinforcement learning for mapping instructions to actions. In Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pp. 82–90, Suntec, Singapore, 2009. Association for Computational Linguistics. URL https://aclanthology.org/P09-1010. Greg Brockman, Vicki Cheung, Ludwig Pettersson, Jonas Schneider, John Schulman, Jie Tang, and Wojciech Zaremba. Openai gym, 2016. URL https://arxiv.org/abs/1606.01540.
2307.13854#41
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
42
FACTOOL powered by GPT-4 significantly outperforms all self-check baselines in scientific literature review. From Tab. 5, we show that FACTOOL powered by GPT-4 significantly outperforms the self-check baselines in scientific literature review. Self-check models are prone to false positives and thus less sensitive in detecting errors. From Tab. 5, we observe that self-check models have lower precision compared to FACTOOL. Comparing Self-Check (3) powered by GPT-4 against FACTOOL powered by GPT-4, we observe: 63.16 vs. 87.50 response-level precision on KB-based QA, 76.43 vs. 89.74 response-level precision on code generation, 58.33 vs. 69.23 response-level precision on math problems, and 12.73 vs. 100.00 response-level precision on scientific literature review. These figures show that self-check models tend to classify claims as “True” considerably more frequently than FACTOOL, suggesting a lower sensitivity for error detection (a small invented numerical illustration follows this entry).
2307.13528#42
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]
2307.13692
42
Yung-kuo Lim and Yuan-qi Qiang. Problems and solutions on mechanics. World Scientific, 2001. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–167, Vancouver, Canada, jul 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. G-eval: NLG evaluation using GPT-4 with better human alignment. arXiv e-prints, art. arXiv:2303.16634, mar 2023. doi: 10.48550/arXiv.2303.16634.
2307.13692#42
ARB: Advanced Reasoning Benchmark for Large Language Models
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
http://arxiv.org/pdf/2307.13692
Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki
cs.CL, cs.LG
Submitted to NeurIPS Datasets and Benchmarks Track
null
cs.CL
20230725
20230728
[ { "id": "2212.14402" }, { "id": "1911.01547" }, { "id": "2206.04615" }, { "id": "2206.05442" }, { "id": "2302.04166" }, { "id": "2303.12712" }, { "id": "2302.14520" }, { "id": "2212.09251" }, { "id": "2302.13814" }, { "id": "2204.01075" }, { "id": "2305.14763" }, { "id": "2305.01937" }, { "id": "2303.16634" } ]
2307.13854
42
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde, Jared Kaplan, Harri Edwards, Yura Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. ArXiv preprint, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374. Xiang Deng, Yu Gu, Boyuan Zheng, Shijie Chen, Samuel Stevens, Boshi Wang, Huan Sun, and Yu Su. Mind2web: Towards a generalist agent for the web, 2023. Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, and Anima Anandkumar. Minedojo: Building open-ended embodied agents with internet-scale knowledge. In Thirty-sixth Conference on Neural Information Processing Systems Datasets and Benchmarks Track, 2022. URL https://openreview.net/forum?id=rc8o_j8I8PX.
2307.13854#42
WebArena: A Realistic Web Environment for Building Autonomous Agents
With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress.
http://arxiv.org/pdf/2307.13854
Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig
cs.AI, cs.CL, cs.LG
Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/
null
cs.AI
20230725
20231025
[ { "id": "2112.09332" }, { "id": "2306.00245" }, { "id": "2307.12856" }, { "id": "2305.14257" } ]
2307.13528
43
Self-check models powered by ChatGPT outperform FACTOOL powered by ChatGPT on KB-QA. Tab. 5 shows that Self-Check (0) powered by ChatGPT outperforms FACTOOL powered by ChatGPT. Through examining specific cases, we found that reasoning errors are the main reason why FACTOOL powered by ChatGPT performs worse than the self-check baselines. Even when provided with sufficient evidence to determine whether the claim is factual or not, the agreement verification implemented by ChatGPT can become confused. For example, for the claim “The modern-day version of fortune cookies was invented in the United States.”, the reasoning of FACTOOL powered by ChatGPT is self-contradictory: “The given text is not entirely factual. The modern-day version of fortune cookies was not invented in the United States. Most people nowadays believe that fortune cookies were created by a Japanese man named Makoto Hagiwara in 1914 in San Francisco. Hagiwara owned what is now called the Golden Gate Park Japanese Tea Garden, where he served tea and fortune cookies. [...] is supported by the provided evidences.” The full reasoning can be found in Fig. 9 of Appendix B. 6.2.2 Performance Analysis We take a closer look at performance in different scenarios by examining evaluated cases.
2307.13528#43
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking. With the above challenges in mind, in this paper, we propose FacTool, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FacTool associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool .
http://arxiv.org/pdf/2307.13528
I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu
cs.CL, cs.AI
null
null
cs.CL
20230725
20230726
[ { "id": "2110.14168" }, { "id": "2201.08239" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "2304.05128" }, { "id": "2305.11206" }, { "id": "2212.07981" }, { "id": "2303.01432" }, { "id": "2207.10397" } ]