Dataset columns (dtype and observed length/value range):

| column | dtype | length / value range |
|---|---|---|
| doi | string | 10 |
| chunk-id | int64 | 0 – 936 |
| chunk | string | 401 – 2.02k |
| id | string | 12 – 14 |
| title | string | 8 – 162 |
| summary | string | 228 – 1.92k |
| source | string | 31 |
| authors | string | 7 – 6.97k |
| categories | string | 5 – 107 |
| comment | string (nullable ⌀) | 4 – 398 |
| journal_ref | string (nullable ⌀) | 8 – 194 |
| primary_category | string | 5 – 17 |
| published | string | 8 |
| updated | string | 8 |
| references | list | |
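A minimal sketch of loading a dump with this schema and reassembling one paper's chunks follows; the Parquet file name is a placeholder for wherever the dump is actually stored, not a published path.

```python
# Minimal sketch: load a chunked arXiv dump with the schema above and
# reassemble one paper's text. "arxiv_chunks.parquet" is a placeholder path.
import pandas as pd

df = pd.read_parquet("arxiv_chunks.parquet")  # hypothetical location

# Each row is one text chunk of one paper; (doi, chunk-id) identifies it uniquely.
print(df.columns.tolist())
print(df[["doi", "chunk-id", "title", "primary_category"]].head())

# Reassemble the full text of a single paper by concatenating its chunks in order.
paper = (
    df[df["doi"] == "2307.14430"]
    .sort_values("chunk-id")["chunk"]
    .str.cat(sep="\n")
)
print(paper[:500])
```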
2307.14430 | 138 | [Figure 32: two line plots of validation loss on stance detection versus training steps (200–600), comparing Skill-It against the no-graph (left) and static data selection (right) baselines at η = 0.1, 0.2, 0.5, 0.8.]
Figure 32: Comparison of SKILL-IT versus using no graph (left) and static data selection (right) with η = 0.1, 0.2, 0.5, 0.8 on the Natural Instructions stance detection fine-tuning experiment. SKILL-IT attains lower validation loss than both no graph and static data selection.
| 2307.14430#138 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
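The Skill-It row above describes an online data-sampling algorithm over mixtures of skills, and the accompanying figures sweep a step-size parameter η. Below is only an illustrative sketch of a graph-weighted, exponentiated mixture update of the kind such a method might use; the update rule, the skills-graph matrix, and all numbers are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch (not the paper's exact algorithm): an online mixture over
# skills, updated with an exponentiated step of size eta that routes sampling
# weight toward skills whose (graph-propagated) validation losses remain high.
import numpy as np

def update_mixture(weights, val_losses, skill_graph, eta=0.2):
    """weights: current sampling probabilities over k skills (sums to 1).
    val_losses: current validation loss per skill, shape (k,).
    skill_graph: k x k matrix; entry (i, j) > 0 means training on skill i
                 is assumed to help skill j.
    """
    signal = skill_graph @ val_losses        # propagate losses to helper skills
    new_w = weights * np.exp(eta * signal)   # multiplicative-weights step
    return new_w / new_w.sum()               # renormalize to a distribution

# Toy usage: three skills, a chain graph 0 -> 1 -> 2 plus self-edges.
k = 3
weights = np.full(k, 1.0 / k)
graph = np.eye(k) + np.diag(np.ones(k - 1), k=1)
for step in range(5):
    losses = np.array([0.9, 1.4, 1.8]) * (0.95 ** step)  # pretend losses shrink
    weights = update_mixture(weights, losses, graph, eta=0.2)
print(weights)  # more mass on skills feeding the hardest skill
```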
2307.14430 | 139 | [Figure 33 plot grid: validation loss versus training steps (1000–5000) on twelve evaluation skills (Answerability Classification, Cause Effect Classification, Coreference Resolution, Data To Text, Dialogue Act Recognition, Grammar Error Correction, Keyword Tagging, Overlap Extraction, Question Rewriting, Textual Entailment, Title Generation, Word Analogy), comparing Skill-It against static data selection at η = 0.1, 0.2, 0.5, 0.8.] | 2307.14430#139 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.14430 | 140 | Figure 33: Comparison of SKILL-IT versus using static data selection with η = 0.1, 0.2, 0.5, 0.8 on the Natural Instructions out-of-domain experiment. SKILL-IT attains the lowest validation loss on 7 out of 12 evaluation skills, and an average loss of 2.540 compared to a range of 2.541-2.551 for static data selection.
attains the lowest validation loss on 7 out of 12 evaluation skills. It has an average loss of 2.540 compared to a range of 2.541-2.551 for static data selection.
| 2307.14430#140 | Skill-it! A Data-Driven Skills Framework for Understanding and Training Language Models | The quality of training data impacts the performance of pre-trained large
language models (LMs). Given a fixed budget of tokens, we study how to best
select data that leads to good downstream model performance across tasks. We
develop a new framework based on a simple hypothesis: just as humans acquire
interdependent skills in a deliberate order, language models also follow a
natural order when learning a set of skills from their training data. If such
an order exists, it can be utilized for improved understanding of LMs and for
data-efficient training. Using this intuition, our framework formalizes the
notion of a skill and of an ordered set of skills in terms of the associated
data. First, using both synthetic and real data, we demonstrate that these
ordered skill sets exist, and that their existence enables more advanced skills
to be learned with less data when we train on their prerequisite skills.
Second, using our proposed framework, we introduce an online data sampling
algorithm, Skill-It, over mixtures of skills for both continual pre-training
and fine-tuning regimes, where the objective is to efficiently learn multiple
skills in the former and an individual skill in the latter. On the LEGO
synthetic in the continual pre-training setting, Skill-It obtains 36.5 points
higher accuracy than random sampling. On the Natural Instructions dataset in
the fine-tuning setting, Skill-It reduces the validation loss on the target
skill by 13.6% versus training on data associated with the target skill itself.
We apply our skills framework on the recent RedPajama dataset to continually
pre-train a 3B-parameter LM, achieving higher accuracy on the LM Evaluation
Harness with 1B tokens than the baseline approach of sampling uniformly over
data sources with 3B tokens. | http://arxiv.org/pdf/2307.14430 | Mayee F. Chen, Nicholas Roberts, Kush Bhatia, Jue Wang, Ce Zhang, Frederic Sala, Christopher Ré | cs.CL, cs.LG | null | null | cs.CL | 20230726 | 20230726 | [
{
"id": "2101.00027"
},
{
"id": "2005.14165"
}
] |
2307.13528 | 0 | FACTOOL: Factuality Detection in Generative AI - A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios

# I-Chun Chern2 Steffi Chern2 Shiqi Chen3 Weizhe Yuan4 Kehua Feng1 Chunting Zhou5 Junxian He6 Graham Neubig2 Pengfei Liu1,7

1Shanghai Jiao Tong University 2Carnegie Mellon University 3City University of Hong Kong 4New York University 5Meta AI 6The Hong Kong University of Science and Technology 7Shanghai Artificial Intelligence Laboratory
# Abstract
The emergence of generative pre-trained models has facilitated the synthesis of high-quality text, but it has also posed challenges in identifying factual errors in the generated text. In particular: (1) A wider range of tasks now face an increasing risk of containing factual errors when handled by generative models. (2) Generated texts tend to be lengthy and lack a clearly defined granularity for individual facts. (3) There is a scarcity of explicit evidence available during the process of fact checking.
| 2307.13528#0 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
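The FacTool abstract above describes a task- and domain-agnostic, tool-augmented loop: extract claims, gather evidence from external tools, then verify. The sketch below shows only the generic shape of such a pipeline; the function bodies, claim splitting, and tool calls are placeholders for illustration, not FacTool's actual prompts or implementation (the linked repository holds the real code).

```python
# Generic shape of a tool-augmented factuality check (placeholder logic only).
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    supported: bool
    evidence: list

def extract_claims(text: str) -> list[str]:
    # Placeholder: a real system would prompt an LLM to split the text into
    # atomic, checkable claims rather than splitting on periods.
    return [s.strip() for s in text.split(".") if s.strip()]

def gather_evidence(claim: str) -> list[str]:
    # Placeholder: a real system would call a task-appropriate tool here
    # (web search for QA, an interpreter for code/math, a paper database
    # for citations) and return evidence snippets.
    return []

def verify(claim: str, evidence: list[str]) -> bool:
    # Placeholder: a real system would prompt an LLM to judge the claim
    # against the gathered evidence.
    return bool(evidence)

def check_factuality(generated_text: str) -> list[Verdict]:
    verdicts = []
    for claim in extract_claims(generated_text):
        evidence = gather_evidence(claim)
        verdicts.append(Verdict(claim, verify(claim, evidence), evidence))
    return verdicts

print(check_factuality("The Eiffel Tower is in Berlin. Water boils at 100 C."))
```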
2307.13692 | 0 |
# ARB: Advanced Reasoning Benchmark for Large Language Models
# Tomohiro Sawada1,2, Daniel Paleka1,3, Alexander Havrilla1,2, Pranav Tadepalli1,2, Paula Vidas1,
Alexander Kranias1,2, John J. Nay4,5, Kshitij Gupta1,6, Aran Komatsuzaki1,2,‡‡
1 DuckAI 2 Georgia Tech 3 ETH Zürich 4 Nomos AI 5 Stanford University Center for Legal Informatics 6 MILA
# Abstract | 2307.13692#0 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
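The ARB abstract above mentions a rubric-based evaluation in which GPT-4 scores intermediate reasoning steps. Below is a minimal sketch of how per-step rubric scores could be weighted and aggregated; the rubric items, weights, and the idea of an LLM judge filling in the scores are assumptions for illustration, not ARB's published protocol.

```python
# Minimal sketch of rubric aggregation for a model-generated solution.
# Rubric items, weights, and example scores are illustrative placeholders;
# in a rubric-based setup an LLM judge (or a human annotator) would assign
# each per-step score against the rubric text.

rubric = [
    {"step": "sets up the correct equation or integral", "weight": 0.4},
    {"step": "applies the appropriate symbolic manipulation", "weight": 0.4},
    {"step": "states the final numeric or symbolic answer", "weight": 0.2},
]

def aggregate(scores):
    """scores: list of per-step scores in [0, 1], aligned with the rubric."""
    assert len(scores) == len(rubric)
    return sum(item["weight"] * s for item, s in zip(rubric, scores))

# Example: full credit on setup, partial on manipulation, none on the answer.
print(aggregate([1.0, 0.5, 0.0]))  # 0.6
```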
2307.13779 | 0 | # Is GPT a Computational Model of Emotion? Detailed Analysis
Ala N. Tak and Jonathan Gratch Institute for Creative Technologies University of Southern California Playa Vista, CA 90094, USA. [email protected], [email protected]
# Contents
1.1 Original prompts
1.2 Emotion derivation
1.3 Affect derivation
2.1 Original prompts
2.2 Prompt engineering
2.3 Alternative framing
2.4 Prompt structures
2.5 Additional data and graphs
2.6 Affect derivation | 2307.13779#0 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2308.02439 | 0 |
# A large language model-assisted education tool to provide feedback on open-ended responses
# Jordan K. Matelsky 1,2, Felipe Parodi 3, Tony Liu 4, Richard D. Lange 1,5, and Konrad P. Kording 1,3,4,6
1Department of Bioengineering, University of Pennsylvania; 2Research & Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory; 3Department of Neuroscience, University of Pennsylvania; 4Department of Computer Science, University of Pennsylvania; 5Department of Computer Science, Rochester Institute of Technology; 6CIFAR LMB Program | 2308.02439#0 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 1 |
With the above challenges in mind, in this paper, we propose FACTOOL, a task and domain agnostic framework for detecting factual errors of texts generated by large language models (e.g., ChatGPT). Experiments on four different tasks (knowledge-based QA, code generation, mathematical reasoning, and scientific literature review) show the efficacy of the proposed method. We release the code of FACTOOL associated with ChatGPT plugin interface at https://github.com/GAIR-NLP/factool.
Figure 1: Tool-augmented framework for factuality detection.
| 2307.13528#1 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 1 | 1 DuckAI 2 Georgia Tech 3 ETH Zürich 4 Nomos AI 5 Stanford University Center for Legal Informatics 6 MILA
# Abstract
Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
# Introduction | 2307.13692#1 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 1 | Abstract This paper investigates the emotional reasoning abilities of the GPT family of large language models via a component perspective. The paper first examines how the model reasons about autobiographical memories. Second, it systematically varies aspects of situations to impact emotion intensity and coping tendencies. Even without the use of prompt engineering, it is shown that GPT's predictions align significantly with human-provided appraisals and emotional labels. However, GPT faces difficulties predicting emotion intensity and coping responses. GPT-4 showed the highest performance in the initial study but fell short in the second, despite providing superior results after minor prompt engineering. This assessment brings up questions on how to effectively employ the strong points and address the weak areas of these models, particularly concerning response variability. These studies underscore the merits of evaluating models from a componential perspective [1].
# 1. Study 1
# 1.1 Original prompts
GPT is sensitive to minor variations in prompt design [2]. To mitigate this, we adopt the strategy of Binz and Schulz to evaluate GPT's cognitive reasoning capabilities [3]. We prompt the model (without any fine-tuning) with the exact question pattern used for human respondents in a psychological experiment, appending only the least required additional text to enable the model to produce uniform answers, like responding to Likert scales. Figure SM.1 is the exact prompt given to GPT in Study 1. | 2307.13779#1 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 1 | With advances in generative AI, there is now potential for autonomous agents to manage daily tasks via natural language commands. However, current agents are primarily created and tested in simplified synthetic environments, leading to a disconnect with real-world scenarios. In this paper, we build an environment for language-guided agents that is highly realistic and reproducible. Specifically, we focus on agents that perform tasks on the web, and create an environment with fully functional websites from four common domains: e-commerce, social forum discussions, collaborative software development, and content management. Our environment is enriched with tools (e.g., a map) and external knowledge bases (e.g., user manuals) to encourage human-like task-solving. Building upon our environment, we release a set of benchmark tasks focusing on evaluating the functional correctness of task completions. The tasks in our benchmark are diverse, long-horizon, and designed to emulate tasks that humans routinely perform on the internet. We experiment with several baseline agents, integrating recent techniques such as reasoning before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of | 2307.13854#1 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 1 | Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies.
Large language models | Automated learning assessment | Automated grading | Education
Correspondence: [email protected]
# Introduction | 2308.02439#1 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 2 | Figure 1: Tool-augmented framework for factuality detection.
Content that is automatically generated can often exhibit inaccuracies or deviations from the truth due to the limited capacity of large language models (LLMs) (Ji et al., 2023; Schulman, 2023). LLMs are susceptible to producing content that appears credible but may actually be factually incorrect or imprecise. This limitation restricts the application of generative AI in some high-stakes areas, such as healthcare, finance, and law. Therefore, it is crucial to identify these errors systematically to improve the usefulness and reliability of the generated content.
# 1 Introduction | 2307.13528#2 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 2 | # Introduction
In recent years, models such as GPT-3 [Brown et al., 2020], GPT-4 [OpenAI, 2023], PaLM [Chowdhery et al., 2022], and Chinchilla [Hoffmann et al., 2022] have shown increasing performance across a wide variety of natural language tasks ranging from translation to reasoning [Bubeck et al., 2023, Laskar et al., 2023]. This rapid progress has been closely tracked and assessed by evaluating LLMs on benchmarks, which test model capabilities on a set of standardized problems. The GLUE benchmark [Wang et al., 2019b] for language understanding was first released in April 2018; but models such as BERT [Devlin et al., 2019] and GPT-2 [Radford et al., 2019] in the following year were already powerful enough to necessitate the "SuperGLUE" benchmark [Wang et al., 2019a]. Since then, the race between language models and benchmarks has increasingly favored the former.
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 2 | Each prompt is provided in a separate conversation, and the text in red is replaced with each story. This was run once for each story. The model occasionally returned additional explanatory text along with each response which was ignored for analysis. The model always returned four emotion labels. The appraisal items are verbatim from Smith and Lazarus 1990 [4].
Imagine the following hypothetical situation and answer the following questions. This is the situation: "My grandfather passed away a few weeks prior owing to health complications. Processing the event in the absence of family was a tall task. The fact that he was doing well a few months back when I was leaving for the USA and the sudden demise were things which compounded the impact of the event. It took a few weeks for me to return to state of normalcy, process the entire event and accepting it." Please rate the situation along the following criteria from 1 to 9. Only provide a number
1. Relevance: How important was what was happening in the situation to you? (1: not at all … 9: extremely)
2. Congruence: Think about what you did and did not want in this situation. How consistent was the situation with what you wanted? (1: not at all … 9: extremely)
3. Self-accountability: To what extent did you consider YOURSELF responsible for the situation? (1: not at all … 9: extremely)
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 2 | before acting. The results demonstrate that solving complex tasks is challenging: our best GPT-4-based agent only achieves an end-to-end task success rate of 14.41%, significantly lower than the human performance of 78.24%. These results highlight the need for further development of robust agents, that current state-of-the-art large language models are far from perfect performance in these real-life tasks, and that WebArena can be used to measure such progress. Our code, data, environment reproduction resources, and video demonstrations are publicly available at https://webarena.dev/. | 2307.13854#2 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 2 | Large language models | Automated learning assessment | Automated grading | Education
Correspondence: [email protected]
# Introduction
Open-ended questions (questions that require students to produce multi-word, nontrivial responses in educational environments) are a popular assessment tool because they offer students the chance to explore their understanding of learning material. Such questions provide valuable insight into students' grasp of complex concepts and their problem-solving approaches. However, grading open-ended questions can be time-consuming, subjective, and (especially in the case of large class sizes) prone to attentional errors. These factors create a critical bottleneck in precision education.
Large Language Models (LLMs) present an opportunity to automate and promote equity in learning assessments, providing rapid valuable feedback to students while reducing the burden on instructors. We developed a tool that automatically assesses students' responses to open-ended questions by evaluating their responses against a set of instructor-defined criteria. To use our tool, the instructor poses a question along with optional grading criteria. Students respond to these questions, and their answers are relayed to a server. The responses are paired with the grading criteria (which are not revealed to the student), forming a payload for a large language model (LLM). The LLM then generates automated feedback, suggesting areas for improvement to the student. | 2308.02439#2 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
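The row above (2308.02439#2) describes pairing a student's response with instructor-defined grading criteria and sending the combined payload to an LLM for feedback. A minimal sketch of that flow is shown below; the prompt wording, model name, and use of the OpenAI chat-completions client are assumptions for illustration, not FreeText's actual server code (which is open source).

```python
# Sketch of the question + criteria + response -> LLM feedback flow described
# above. Prompt wording and model choice are illustrative assumptions.
from openai import OpenAI  # assumes the `openai` package and an API key are available

client = OpenAI()

def feedback(question: str, criteria: list[str], student_response: str) -> str:
    prompt = (
        "You are a teaching assistant. Give brief, constructive feedback.\n"
        f"Question: {question}\n"
        "Grading criteria (do not reveal them verbatim):\n"
        + "\n".join(f"- {c}" for c in criteria)
        + f"\nStudent response: {student_response}\nFeedback:"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

print(feedback(
    "Why do neurons have a refractory period?",
    ["mentions sodium channel inactivation", "links it to spike directionality"],
    "Because the neuron needs to rest after firing.",
))
```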
2307.13528 | 3 |
# 1 Introduction
Generative artificial intelligence (AI) technology, exemplified by GPT-4 (OpenAI, 2023) consolidates various tasks in natural language processing into a single sequence generation problem. This unified architecture enables users to complete multiple tasks (e.g., question answering (Thoppilan et al., 2022), code generation (Chen et al., 2021), math problem solving (Lewkowycz et al., 2022), and scientific literature generation (Taylor et al., 2022)) through a natural language interface (Liu et al., 2023) with both unprecedented performance (Bubeck et al., 2023) and interactivity.
Current literature on detecting and mitigating factual errors generated by machine learning models focuses predominantly on a single specific task, for example, retrieval-augmented verification models for QA (Lewis et al., 2020), hallucination detection models for text summarization (Fabbri et al., 2022), and execution-based evaluation for code (Shi et al., 2022). While these methods have proven successful within their respective areas, given the remarkable versatility of tasks and domains handled by LLMs, we argue that it is also important
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 3 | Scaling up, model sizes and datasets alike, has led to rapid improvements on various natural language tasks on benchmarks like BIG-bench [Srivastava et al., 2022] and HELM [Liang et al., 2022]. Neural scaling laws [Kaplan et al., 2020, Caballero et al., 2023, Alabdulmohsin et al., 2022] have been used to predict the behavior of large scale models on various metrics. Nevertheless, LLM performance often increases unpredictably [Wei et al., 2022a], especially on tasks that require reasoning abilities. Predictions of performance on ML benchmarks often underestimate the rate of progress [Steinhardt, 2022]. Since progress has been faster than anticipated, new benchmarks need to be more difficult.
Email: [email protected]. ‡‡Email: [email protected].
Models such as ChatGPT have shown the ability to pass entry-level examinations in fields such as law [Bommarito II and Katz, 2022], medicine [Kung et al., 2023], economics [Caplan, 2023], and mathematics [Shakarian et al., 2023]. Nevertheless, LLM understanding of many fields is reportedly shallow and unreliable [Shapira et al., 2023]. Expert reasoning in domains with specialized knowledge is essential for automated systems to augment skilled professionals [Noy and Zhang, 2023]. | 2307.13692#3 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 3 | 3. Self-accountability: To what extent did you consider YOURSELF responsible for the situation? (1: not at all … 9: extremely)
4. Other-accountability: To what extent did you consider SOMEONE ELSE responsible for the situation? (1: not at all … 9: extremely)
5. Future-expectancy: Think about how you wanted this situation to turn out. How consistent with these wishes did you expect the situation to become (or stay)? (1: not at all … 9: extremely)
6. Problem-focused coping: Think about what you did and didn't want in this situation. How certain were you that you would be able to influence things to make (or keep) the situation the way you wanted it? (1: certainly WILL not be able … certainly WILL be able)
7. Accommodative-focused coping: How certain were you that you would be able to deal emotionally with what was happening in this situation? (1: not able to cope … 9: completely able to cope)
8. Finally, please list at most four emotions someone in this situation is likely to feel.
# Figure SM.1: Prompt used in Study 1.
# 1.2 Emotion derivation | 2307.13779#3 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
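Figure SM.1 above lists the exact appraisal items given to GPT as plain text. The sketch below shows one way such a prompt could be issued programmatically per story and the Likert answers parsed; the model name, the naive parsing heuristic, and the OpenAI client are assumptions for illustration, not the authors' actual setup.

```python
# Sketch: issue the Study 1 appraisal prompt for one story and pull out the
# numeric Likert ratings. Model name and parsing are illustrative assumptions.
import re
from openai import OpenAI  # assumes the `openai` package and an API key

client = OpenAI()

APPRAISAL_ITEMS = 7  # items 1-7 are rated 1-9; item 8 asks for emotion labels

def rate_story(prompt_template: str, story: str, model: str = "gpt-4") -> list[int]:
    prompt = prompt_template.replace("{STORY}", story)
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # reduce the response variability noted in the paper
    )
    text = resp.choices[0].message.content
    # Naive parse: take the first digit 1-9 found on each line, up to 7 items.
    ratings = []
    for line in text.splitlines():
        m = re.search(r"\b([1-9])\b", line)
        if m:
            ratings.append(int(m.group(1)))
        if len(ratings) == APPRAISAL_ITEMS:
            break
    return ratings
```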
2307.13854 | 3 | # INTRODUCTION
Autonomous agents that perform everyday tasks via human natural language commands could significantly augment human capabilities, improve efficiency, and increase accessibility. Nonetheless, to fully leverage the power of autonomous agents, it is crucial to understand their behavior within an environment that is both authentic and reproducible. This will allow measurement of the ability of agents on tasks that human users care about in a fair and consistent manner.
Current environments for evaluating agents tend to over-simplify real-world situations. As a result, the functionality of many environments is a limited version of their real-world counterparts, leading to a lack of task diversity (Shi et al., 2017; Anderson et al., 2018; Gordon et al., 2018; Misra et al., 2016; Shridhar et al., 2020; 2021; Yao et al., 2022a). In addition, these simplifications often lower the complexity of tasks as compared to their execution in the real world (Puig et al., 2018; Shridhar et al., 2020; Yao et al., 2022a). Finally, some environments are presented as a static resource (Shi et al., 2017; Deng et al., 2023) where agents are confined to accessing only those states that were previously cached during data collection, thus limiting the breadth and diversity of exploration. For evaluation, many environments focus on comparing the textual surface form of the predicted
# ∗Lead contributors. †Equal contribution.
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 3 | Here, we describe the technical design of our tool, FreeText, and showcase its utility in educational environments spanning topics and complexity. We further outline the implications of our work for teaching complex subjects, and the potential role of large language models in education (Fig. 1). We share our source code and a public URL (see Supplemental Materials), allowing educators to experiment with FreeText firsthand.
(Figure 1 sketch: multiple-choice/heuristic autograders, LLM autograders, and human graders plotted on axes of Feedback Quality (x) vs. Throughput (y), with a "superhuman grading" region and arrows labeled "faster technology" and "better prompts"; caption below.)
Figure 1. Sketch comparing grading throughput and quality of feedback to students among various assessment methodologies. The y-axis represents throughput (i.e., rapidity of feedback generation and number of assignments evaluated per real-world unit-time or cost), and the x-axis represents feedback quality (a qualitative measure of personalization and detail of feedback given to students). LLMs have the potential to fill a niche among educational tools by striking a balance between quantity and quality, delivering high throughput with feedback quality comparable to human graders. Improvements in technology (faster GPU cards, better LLM architectures) will continue to push throughput upward, and improvements in prompt design (or other domain-specific adaptations) will improve the quality of LLM-generated feedback. | 2308.02439#3 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 4 | However, at the same time, such a generative paradigm also introduces some unique challenges.
*Corresponding author
| Methods | Response Length | Generated by | Claim Granularity | Claim Provided | Evidence Provided | Domain | Task |
|---|---|---|---|---|---|---|---|
| FEVER-based | 7.30 | Human | Fact | ✓ | ✗ | Wikipedia | Fact Verification |
| FactCC | 20.83 | Synthetic | Sentence | ✓ | ✓ | Newswire | Summ. Factuality |
| QAGS-based | 16.11 | Model | Summary | ✓ | ✓ | Newswire | Summ. Factuality |
| WICE-based | 24.20 | Human | Fact | ✓ | ✓ | Wikipedia | Entailment |
| RARR | - | PaLM/LaMDA | Fact | ✗ | ✗ | Wikipedia | QA |
| FACTOOL | 41.80 | ChatGPT | Fact | ✗ | ✗ | Wikipedia | QA |
| FACTOOL | 30.37 | ChatGPT | Snippet | ✗ | ✗ | Python | Code generation |
| FACTOOL | 67.13 | ChatGPT | Statement | ✗ | ✗ | Math | Math Problems |
| FACTOOL | 76.34 | ChatGPT | Tuple | ✗ | ✗ | Sci. text | Sci. Review |
Table 1: A comparison of published approaches for factuality detection in terms of generated responses and claims to be verified based on collected evidence. "Scenario" represents which task and domain the corresponding approach has been justified. "Sci." represents "Scientific".
to have a more comprehensive factuality detection and verification framework that is similarly versatile. | 2307.13528#4 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 4 | In this paper, we introduce a new benchmark dataset, ARB (Advanced Reasoning Benchmark), designed to evaluate expert reasoning abilities in mathematics, physics, chemistry, biology, and law. To make the benchmark more challenging than previous benchmarks, we extract graduate-level tasks from resources intended for domain professionals. The performance of current models such as GPT-4 on the quantitative parts of ARB is very low using standard prompting methods.
Our dataset offers improvements over existing benchmarks:
⢠Hundreds of problems requiring expert reasoning in quantitative subjects, where LLMs are known to underperform;
⢠A large percentage of the problems are short-answer and open response questions, in contrast to the multiple-choice questions that dominated earlier benchmarks.
In addition, we propose an automated rubric-based method allowing self-evaluation of intermediate reasoning steps. While not currently a substitute for human evaluation, rubrics generated by GPT-4 have good coverage, and self-evaluation scores track human grading surprisingly well.
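A minimal sketch of such rubric-based self-evaluation is shown below. The rubric items, prompt wording, and the `call_llm` interface are assumptions for illustration, not the paper's released prompts or grading pipeline.

```python
# Illustrative sketch of rubric-based scoring of intermediate reasoning steps.
# `call_llm` is an assumed callable that sends a prompt to a model (e.g., GPT-4)
# and returns its text completion.
from typing import Callable, Dict, List


def score_solution_with_rubric(
    problem: str,
    model_solution: str,
    rubric_items: List[str],
    call_llm: Callable[[str], str],
) -> Dict[str, int]:
    """Ask the evaluator model to award 0/1 credit for each rubric item."""
    scores = {}
    for item in rubric_items:
        prompt = (
            "You are grading a solution against one rubric item.\n"
            f"Problem:\n{problem}\n\nSolution:\n{model_solution}\n\n"
            f"Rubric item: {item}\n"
            "Answer with a single digit: 1 if the item is satisfied, 0 otherwise."
        )
        reply = call_llm(prompt).strip()
        scores[item] = 1 if reply.startswith("1") else 0
    return scores


# Example usage with hypothetical rubric items for a physics problem:
# rubric = ["Sets up conservation of energy", "Correct final numerical answer"]
# score_solution_with_rubric(problem_text, solution_text, rubric, call_llm)
```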
We provide the instructions to access the dataset in the supplementary material.
# 2 Related Work | 2307.13692#4 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 4 | 8. Finally, please list at most four emotions someone in this situation is likely to feel.
# Figure SM.1: Prompt used in Study 1.
# 1.2 Emotion derivation
Human participants offered from one to eight emotional labels for their stories (M=2.31, SD=1.39). GPT-3.5 and GPT-4 always returned four labels. We explored two general approaches for comparing these labels. First, as reported in the paper [5], we converted labels into valence, arousal, and dominance scores. The results in the paper use a dictionary-based method as people reported very common emotion terms like joy, anger, or disappointment. We also complement this with an embedding approach summarized here. Second,
we examined if one of the words output by GPT was an exact match for one of the words provided by the participant, where different grammatical forms of the identical word were considered a match (e.g., angry matches anger, but fear does not match scared). Interestingly, the first word reported by GPT was the best match, suggesting that the first word provided by the model is its best guess.
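A small sketch of this exact-match comparison follows. The paper does not specify how grammatical variants (e.g., angry vs. anger) were collapsed, so the normalization table below is purely illustrative.

```python
# Sketch of the word-match metric: a hit when any of GPT's first k labels matches any
# human-provided label after collapsing grammatical variants. CANONICAL is a
# hypothetical, hand-written normalization table used only for illustration.
CANONICAL = {"angry": "anger", "joyful": "joy", "scared": "fear", "disappointed": "disappointment"}


def normalize(label: str) -> str:
    label = label.lower().strip()
    return CANONICAL.get(label, label)


def first_k_match_rate(gpt_labels, human_labels, k=1):
    """Fraction of stories where one of GPT's first k labels matches any human label."""
    hits = 0
    for gpt, human in zip(gpt_labels, human_labels):
        gpt_set = {normalize(w) for w in gpt[:k]}
        human_set = {normalize(w) for w in human}
        hits += bool(gpt_set & human_set)
    return hits / len(gpt_labels)


# e.g. first_k_match_rate(gpt_outputs, participant_labels, k=1) would give the
# "first label matches" statistic of Table SM.2, given the underlying label lists.
```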
The dictionary results are reported in the paper. Here we report the embedding and word-match results.
1.2.1 Embedding results | 2307.13779#4 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 4 | # *Lead contributors. † Equal contribution.
Figure 1: WebArena is a standalone, self-hostable web environment for building autonomous agents. WebArena creates websites from four popular categories with functionality and data mimicking their real-world equivalents. To emulate human problem-solving, WebArena also embeds tools and knowledge resources as independent websites. WebArena introduces a benchmark on interpreting high-level realistic natural language command to concrete web-based interactions. We provide annotated programs designed to programmatically validate the functional correctness of each task.
action sequences with reference action sequences, disregarding the functional correctness of the executions and possible alternative solutions (Puig et al., 2018; Jernite et al., 2019; Xu et al., 2021; Li et al., 2020; Deng et al., 2023). These limitations often result in a discrepancy between simulated environments and the real world, and can potentially impact the generalizability of AI agents to successfully understand, adapt, and operate within complex real-world situations. | 2307.13854#4 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 4 | # Related Work
Automated grading is a longstanding pursuit in the field of education technology. Early automated grading tools focused on "solvable" tasks like math or programming assignments, where grading generally relies on unit tests or direct output comparisons (Hollingsworth, 1960; Ureel II and Wallace, 2019; Orr and Russell, 2021; Messer et al., 2023). These approaches often overlook less easily-quantified but nonetheless critical indicators of learning and understanding, such as design quality, code maintainability, or potential areas of student confusion. Modern tools, like AutoGrader, which provides real-time grading for programming exercises, remain narrowly focused on output correctness and do not sufficiently account for documentation or maintainability (Liu et al., 2019). | 2308.02439#4 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 5 | to have a more comprehensive factuality detection and verification framework that is similarly versatile.
Additionally, in the current literature, the task of factuality detection is usually simplified as either (i) given a claim, determining whether it is factually correct, or (ii) given evidence, determining whether the generated claim is supported. This task definition is not well suited to writing tasks that users commonly engage with when interacting with generative models (e.g., ChatGPT), where we often need to validate the factuality of a long-form generation without explicit claims and evidence.
⢠We connect the concept of âtool useâ with âfac- tuality detectionâ, developing a uniï¬ed and ver- satile framework for factuality detection across a variety of domains and tasks.
⢠We use FACTOOL to evaluate the factuality of modern chatbots, and found that GPT-4 has the best factuality across almost all scenarios. Su- pervisely ï¬ne-tuned chatbots (Vicuna-13B) have reasonably good factuality in KB-based QA but perform poorly in more challenging scenarios, in- cluding code generation, math problem solving, and scientiï¬c literature review writing. | 2307.13528#5 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 5 | We provide the instructions to access the dataset in the supplementary material.
# 2 Related Work
Improving the reasoning capabilities of LLMs has been a subject of recent interest, with a particular focus on advanced prompting techniques [Wei et al., 2022b, Kojima et al., 2023, Wang et al., 2023, Yao et al., 2023, Nye et al., 2021]. Such techniques have seen increasingly successful applications in solving reasoning problems involving commonsense reasoning and mathematics, by promoting active reasoning processes within the LLMs before yielding final answers.
Model architectures such as Minerva [Lewkowycz et al., 2022] have exemplified the enhancement of reasoning capabilities through fine-tuning on extensive datasets covering math and reasoning tasks. This has yielded improved performance across several benchmarks, including MATH [Hendrycks et al., 2021], GSM8K [Cobbe et al., 2021], and MMLU [Hendrycks et al., 2020]. Concurrently, other lines of research [Li et al., 2023, Lightman et al., 2023, Cobbe et al., 2021] have investigated the application of verification techniques to augment and enhance LLM performance. | 2307.13692#5 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 5 | The dictionary results are reported in the paper. Here we report the embedding and word-match results.
1.2.1 Embedding results
We approach this problem using word embeddings, such as those provided by Word2Vec, combined with distance/similarity metrics, such as cosine similarity. Word embeddings represent words in a multi-dimensional space and are generated in such a way that similar words are close to each other in this space. We first take each pair of emotion labels, calculate their word vectors (using Word2Vec [6]), and then measure the cosine similarity between the vectors. Our analysis reveals an average general similarity of approximately 0.66 and 0.50 across all comparisons using GPT-3.5 and GPT-4 output, respectively, indicating moderate-to-strong similarity. This approach assumes that similar word embeddings would have similar emotional content, which is a simplification. Word embeddings capture many facets of a word's meaning, which includes but is not limited to its emotional content. As a result, while the cosine similarity of word embeddings can serve as a rough proxy for emotional similarity, it will not fully capture the valence and arousal dimensions. | 2307.13779#5 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
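The label-similarity computation described in the 2307.13779 chunk above (Word2Vec embeddings plus cosine similarity) can be sketched as below. The specific pretrained model ("word2vec-google-news-300") is an assumption; the excerpt only states that Word2Vec vectors were used.

```python
# Sketch of the embedding-based label comparison: embed each emotion word with a
# pretrained Word2Vec model and take the cosine similarity of the two vectors.
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")  # pretrained KeyedVectors (large download)


def label_similarity(word_a: str, word_b: str) -> float:
    """Cosine similarity between two emotion labels; 0.0 if a word is out of vocabulary."""
    if word_a not in wv or word_b not in wv:
        return 0.0
    return float(wv.similarity(word_a, word_b))


print(label_similarity("anger", "frustration"))
print(label_similarity("joy", "sadness"))
```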
2307.13854 | 5 | We introduce WebArena, a realistic and reproducible web environment designed to facilitate the development of autonomous agents capable of executing tasks (§2). An overview of WebArena is in Figure 1. Our environment comprises four fully operational, self-hosted web applications, each representing a distinct domain prevalent on the internet: online shopping, discussion forums, collaborative development, and business content management. Furthermore, WebArena incorporates several utility tools, such as map, calculator, and scratchpad, to best support possible human-like task executions. Lastly, WebArena is complemented by an extensive collection of documentation and knowledge bases that vary from general resources like English Wikipedia to more domain-specific references, such as manuals for using the integrated development tool (Fan et al., 2022). The content populating these websites is extracted from their real-world counterparts, preserving the authenticity of the content served on each platform. We deliver the hosting services using Docker containers with gym-APIs (Brockman et al., 2016), ensuring both the usability and the reproducibility of WebArena. | 2307.13854#5 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
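The WebArena record above states that the environment is served from Docker containers behind gym-style APIs. The sketch below shows what a gym-style interaction loop with such an environment could look like; the environment id, observation format, and random-action policy are placeholders, not the project's actual interface.

```python
# Hedged sketch of a gym-style interaction loop with a self-hosted web environment.
import gymnasium as gym  # assumes a gymnasium-compatible registration of the environment

env = gym.make("webarena/shopping-v0")  # hypothetical environment id
observation, info = env.reset(seed=0)
for _ in range(10):
    action = env.action_space.sample()  # stand-in for an LLM-backed policy
    observation, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        break
env.close()
```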
2308.02439 | 5 | Assessing students' understanding from natural language responses, however, presents different challenges and has seen significant evolution. Early Automated Short Answer Grading (ASAG) models employed statistical or domain-specific neural network approaches (Heilman and Madnani, 2013; Riordan et al., 2017; Sung et al., 2019). In recent years, LLMs have been shown to outperform domain-specific language models (Radford et al., 2019; Mizumoto et al., 2019; Brown et al., 2020; Chung et al., 2022). LLMs facilitate grading of open-ended assignment responses, without the need for task-specific fine-tuning (Cao, 2023; Mizumoto and Eguchi, 2023; Yoon, 2023). However, Kortemeyer (2023) revealed that while LLMs like GPT-4 could be useful for preliminary grading of introductory physics assignments, they fell short for natural-language responses required in comprehensive exam grading. Further, while LLMs like GitHub Copilot streamline the process of code generation and review, they can fall short on more nuanced programming tasks and open-ended evaluation (Finnie-Ansley et al., 2022). Thus, in their current state, LLMs should be treated as a useful but fallible tool, with final assessments still in the hands of (human) instructors. | 2308.02439#5 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 6 | In this paper, we propose a task and domain-agnostic framework, FACTOOL, which aims to detect factual errors in LLM-generated texts. We illustrate our framework in Fig. 1, where we connect the concept of "tool use" (Thoppilan et al., 2022; Gao et al., 2022b; Schick et al., 2023) with "factuality detection" and demonstrate that the ability to use tools in LLMs is crucial for factuality detection. Specifically, FACTOOL leverages various tools, including Google Search, Google Scholar, code interpreters, Python, or even LLMs themselves, to gather evidence about the factuality of the generated content. Moreover, our framework employs the reasoning abilities of LLMs to assess the factuality of the content, given the evidence that has been gathered. We develop a benchmark and perform experiments across four tasks: knowledge-based QA, code generation, math problem solving, and scientific literature review writing. In summary, our contributions are:
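A condensed sketch of the verify-with-tools loop described above follows. The `call_llm` and `web_search` callables are assumed interfaces (an LLM completion function and a search API returning text snippets), and the prompts are paraphrases, not the released FACTOOL prompts.

```python
# Hedged sketch of a claim-extract / query / retrieve / verify loop over an LLM response.
from typing import Callable, List


def check_factuality(
    response: str,
    call_llm: Callable[[str], str],
    web_search: Callable[[str], List[str]],
) -> List[dict]:
    # 1) Extract atomic claims from the free-form response.
    claims = call_llm(f"List each atomic factual claim in the text, one per line:\n{response}").splitlines()
    results = []
    for claim in filter(None, map(str.strip, claims)):
        # 2) Generate a search query and gather evidence with an external tool.
        query = call_llm(f"Write a short web search query to verify: {claim}")
        evidence = "\n".join(web_search(query)[:3])
        # 3) Ask the LLM to reason over the evidence and give a verdict.
        verdict = call_llm(
            f"Claim: {claim}\nEvidence:\n{evidence}\nAnswer SUPPORTED or REFUTED with one sentence of reasoning."
        )
        results.append({"claim": claim, "evidence": evidence, "verdict": verdict})
    return results
```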
# 2 Related Work | 2307.13528#6 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 6 | Most of the aforementioned work has typically evaluated techniques against math benchmarks (e.g., GSM8K [Cobbe et al., 2021], MATH [Hendrycks et al., 2021], SVAMP [Patel et al., 2021], ASDiv [Miao et al., 2020], AQuA [Ling et al., 2017], MAWPS [Koncel-Kedziorski et al., 2016], MultiArith [Roy and Roth, 2016]) and commonsense reasoning tasks (e.g., CSQA [Talmor et al., 2018], StrategyQA [Geva et al., 2021], HotpotQA [Yang et al., 2018]). Recently, several new benchmarks have been introduced for reasoning and planning tasks, such as the GPT-Planning Benchmark [Valmeekam et al., 2023], ALERT Reasoning Benchmark [Yu et al., 2022], JEEBench [Arora et al., 2023]), and [Gendron et al., 2023]. Additionally, comprehensive evaluation suites like the Chain-of-Thought Hub [Fu et al., 2023] have been proposed. | 2307.13692#6 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 6 | To discover certain "directions" in the word embedding space that seem to correspond to particular semantic differences (i.e., emotional content), we projected word vectors onto the "VAD" dimension in Word2Vec and compared the labels in terms of this projection. However, Word2Vec does not inherently have an interpretable VAD dimension. Thus, we identified pairs of words that differ mainly in terms of V (or A, D) and subtracted their vectors to find the difference vectors. We average these difference vectors to find a vector that roughly points in the "V" (or A, D) direction in the word embedding space. Finally, we computed the correlation between the projections of GPT and human labels to the generated VAD directions, which is presented in Table SM.1.
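A sketch of this projection procedure is shown below. The word pairs and the example label lists are illustrative placeholders; the excerpt does not list the exact pairs or data used.

```python
# Sketch: average difference vectors of word pairs that differ mainly in valence to get a
# rough "V" direction, project each label onto it, and correlate GPT vs. human projections.
import numpy as np
from scipy.stats import pearsonr
import gensim.downloader as api

wv = api.load("word2vec-google-news-300")

valence_pairs = [("happy", "sad"), ("delighted", "miserable"), ("pleased", "upset")]  # illustrative
v_direction = np.mean([wv[a] - wv[b] for a, b in valence_pairs], axis=0)
v_direction /= np.linalg.norm(v_direction)


def valence_projection(word: str) -> float:
    return float(np.dot(wv[word], v_direction))


# Placeholder label lists standing in for per-story GPT and human emotion words.
gpt_labels = ["joy", "anger", "fear"]
human_labels = ["happiness", "frustration", "anxiety"]
r, p = pearsonr([valence_projection(w) for w in gpt_labels],
                [valence_projection(w) for w in human_labels])
print(f"r = {r:.3f}, p = {p:.3f}")
```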
Table SM.1: Correlation with human-reported emotion
Models: Valence; Arousal; Dominance
GPT-3.5: r = 0.793, p < .001***; r = 0.690, p < .001***; r = 0.337, p = .044
GPT-4: r = 0.779, p < .001***; r = 0.532, p < .001***; r = 0.026, p = .881 | 2307.13779#6 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 6 | Along with WebArena, we release a ready-to-use benchmark with 812 long-horizon web-based tasks (§3). Each task is described as a high-level natural language intent, emulating the abstract language usage patterns typically employed by humans (Bisk et al., 2019). Two example intents are shown in the upper left of Figure 1. We focus on evaluating the functional correctness of these tasks, i.e., does the result of the execution actually achieve the desired goal (§3.2). For instance, to evaluate the example in Figure 2, our evaluation method verifies the concrete contents in the designated repository. This evaluation is not only more reliable (Zhong et al., 2017; Chen et al., 2021; Wang et al., 2022) than comparing the textual surface-form action sequences (Puig et al., 2018; Deng et al., 2023) but also accommodates a range of potential valid paths to achieve the same goal, which is a ubiquitous phenomenon in sufficiently complex tasks. | 2307.13854#6 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
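The record above contrasts functional-correctness evaluation (checking the resulting state, such as the contents of the designated repository) with surface-form comparison of action sequences. The sketch below shows one way such a check could be written; the host, port, project path, and expected strings are placeholders, not the benchmark's actual evaluation code.

```python
# Hedged sketch of a functional-correctness check: fetch the final state (a README file
# in a self-hosted GitLab instance) and assert on its contents, rather than comparing the
# agent's action string to a reference sequence. Authentication headers may be required
# for non-public projects.
import urllib.request


def check_readme_contains(expected_substrings):
    url = ("http://localhost:8023/api/v4/projects/example%2Ftravel-plan"
           "/repository/files/README.md/raw?ref=main")  # hypothetical self-hosted endpoint
    with urllib.request.urlopen(url) as resp:
        readme = resp.read().decode("utf-8")
    return all(s.lower() in readme.lower() for s in expected_substrings)


# Expected phrases here are illustrative stand-ins for the task's reference answer.
print(check_readme_contains(["Carnegie Museum of Art", "The Andy Warhol Museum"]))
```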
2308.02439 | 6 | It is also important to consider how students perceive AI graders and how automated graders are deployed to educational settings (Burrows et al., 2015; Saha et al., 2019; Zhu et al., 2022). Many comment on the socio-technical dynamics of automated grading, including the potential for introduction of machine bias (e.g., Hsu et al. (2021)). The use of NLP for short answer grading is not a trivial task and has been set as an evaluation challenge in its own right (Dzikovska et al., 2013).
To address the evolving needs of grading open-ended responses, our framework proposes four key enhancements. First, it is specifically designed for open-ended questions, which are not typically well-served by the rubric-based grading of most ed-tech tools. Second, our system leverages LLMs to deliver rapid, personalized feedback for student responses without explicitly attempting to produce a quantitative grade. Third, our framework introduces a feedback loop to continually improve instructor-provided prompts, question suggestions, and grading criteria. Lastly, our tool integrates with the Jupyter Notebook environment, extensively utilized in fields such as computer science, data science, and statistics.
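A minimal sketch of such a notebook integration follows: a widget collects a student answer and requests feedback from a FreeText-style HTTP endpoint. The endpoint URL and JSON fields are assumptions, not the project's documented API.

```python
# Hedged sketch of a Jupyter feedback widget posting a response to a feedback service.
import ipywidgets as widgets
import requests
from IPython.display import display

answer_box = widgets.Textarea(description="Answer:", layout=widgets.Layout(width="100%", height="120px"))
submit = widgets.Button(description="Get feedback")
output = widgets.Output()


def on_submit(_):
    with output:
        output.clear_output()
        resp = requests.post(
            "http://localhost:8000/assessments/feedback",  # hypothetical endpoint
            json={"question_id": "q1", "response": answer_box.value},
        )
        print(resp.json().get("feedback", resp.text))


submit.on_click(on_submit)
display(answer_box, submit, output)
```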
# Approach | 2308.02439#6 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 7 | # 2 Related Work
Factuality Detection in Natural Language Processing Factuality detection was a topic of rigorous study even before the advent of generative AI. Existing works can be organized by their differences in terms of the "response" to be verified, the "claim" extracted from the response, and supporting "evidence". As illustrated in Tab. 1, the creation of the FEVER dataset (Thorne et al., 2018a) spawned models (Zhong et al., 2020; Krishna et al., 2022) that determine whether a given fine-grained claim made based on Wikipedia1 articles is correct. In this task setting, both the claim and related evidence are given. FactCC (Kryscinski et al., 2020) and QAGS-based models (Wang et al., 2020) adopted different task formulations to detect factual consistency, i.e., given the evidence text, and the goal is to determine if the generated summaries or summary sentences are factually consistent with the given text. WICE-based methods (Kamoi et al., 2023) decide if a fact from a Wikipedia sentence could be supported
⢠We revisit the task of factuality detection and extend it in a way that allows for a better audit of current generative AI models. | 2307.13528#7 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 7 | Despite their utility, existing benchmarks are limited in difficulty, represent a restricted range of reasoning challenges, and do not necessarily mirror real-world tasks demanding complex reasoning. Moreover, recent advancements such as Minerva [Lewkowycz et al., 2022] have revealed that these benchmarks may not offer sufficient challenge.
The rapid progress in LLM capabilities has led many to explore using LLMs in the LLM evaluation pipeline. Apart from using LLMs to generate evaluation tasks [Zhang et al., 2022, Perez et al., 2022], LLMs have increasingly been used as a proxy for human evaluation [Chiang and Lee, 2023, Liu et al., 2023, Fu et al., 2023, Kocmi and Federmann, 2023]. Useful LLM-based evaluation for alignment has been done using rubrics [Bai et al., 2022]. We explore the efficacy of rubrics for evaluation when applied to highly complex math and physics problems.
# 3 Benchmark
The key considerations when building a machine learning benchmark are:
⢠Difficulty. Most tasks have to be out of reach of current models; a benchmark where many
models score over 95% is not useful for tracking differential AI development. ⢠Usefulness. The tested skills should correlate with generally useful human skills. ⢠Ease of evaluation. It should be straightforward for the model creators to compare the
performances of different models. The scores should be interpretable. | 2307.13692#7 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 7 | It should be noted that this method assumes that the difference vectors capture the semantic difference between words as intended, which is not always true. Also, we assume that the "V" (or A, D) dimension is orthogonal to the other dimensions in the word embedding space, which may not be the case. Lastly, the choice of word pairs can greatly affect the resulting VAD vectors.
1.2.2 Word-match results
Table SM.2 lists how often a GPT-provided label matches one of the human-provided emotion labels. This is broken out by the order of words produced by the model. For example, the first label provided by GPT-3.5 matched one of the human-provided labels for a given story 42.9% of the time. The second label only matched 34.3% of the time, and so forth. Overall, at least one of the labels matched at least one of the human responses 80% of the time. GPT-4 was slightly less accurate than GPT-3.5 on this metric, but this difference failed to reach significance: χ2(1, N = 35) = 0.8, p = .771.
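One plausible reading of the reported comparison is sketched below as a 2x2 chi-square test over hit/miss counts. The GPT-3.5 count follows from "80% of 35 stories"; the GPT-4 count is a hypothetical placeholder, and the authors' exact test construction may differ.

```python
# Hedged sketch of the significance test comparing the two models' match rates.
import numpy as np
from scipy.stats import chi2_contingency

n_stories = 35
gpt35_hits = 28   # about 80% of 35 stories had at least one matching label
gpt4_hits = 25    # placeholder count for "slightly less accurate"

table = np.array([
    [gpt35_hits, n_stories - gpt35_hits],
    [gpt4_hits, n_stories - gpt4_hits],
])
chi2, p, dof, _ = chi2_contingency(table, correction=False)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.3f}")
```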
3 | 2307.13779#7 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 7 | We use this benchmark to evaluate several agents that can follow NL commands and perform web-based tasks (§4). These agents are implemented in a few-shot in-context learning fashion with powerful large language models (LLMs) such as GPT-4 and PALM-2. Experiment results show that the best GPT-4 agent performance is somewhat limited, with an end-to-end task success rate of only 14.41%, while the human performance is 78.24%. We hypothesize that the limited performance of current LLMs stems from a lack of crucial capabilities such as active exploration and failure recovery to successfully perform complex tasks (§5.2). These outcomes underscore the necessity for further development towards robust and effective agents (LeCun, 2022) in WebArena.
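A schematic of such a few-shot, in-context agent loop is sketched below. The demonstration string, prompt format, observation/action representation, and `call_llm` interface are assumptions, not the benchmark's released prompts.

```python
# Hedged sketch of a few-shot, in-context web agent loop driven by an LLM.
from typing import Callable, List

FEW_SHOT_DEMOS = (
    "Objective: find the price of item X\n"
    "Observation: <accessibility tree>\n"
    "Action: click [42]\n\n"
)


def run_agent(objective: str, env, call_llm: Callable[[str], str], max_steps: int = 30) -> List[str]:
    trajectory = []
    observation, _ = env.reset()
    for _ in range(max_steps):
        prompt = f"{FEW_SHOT_DEMOS}Objective: {objective}\nObservation: {observation}\nAction:"
        action = call_llm(prompt).strip()
        trajectory.append(action)
        observation, reward, terminated, truncated, _ = env.step(action)
        if terminated or truncated or action.startswith("stop"):
            break
    return trajectory
```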
Figure 2: A high-level task that can be fully executed in WebArena. Success requires sophisticated, long-term planning and reasoning. To accomplish the goal (top), an agent needs to (1) find Pittsburgh art museums on Wikipedia, (2) identify their locations on a map (while optimizing the itinerary), and (3) update the README file in the appropriate repository with the planned route.
# 2 WEBARENA: WEBSITES AS AN ENVIRONMENT FOR AUTONOMOUS AGENTS | 2307.13854#7 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 7 | # Approach
We have designed our tool for use in a variety of educational contexts, ranging from primary school education to graduate courses. FreeText enables educators to integrate open-ended questions into their curriculum without incurring an instructor labor cost. This allows students to gain rapid, individualized, and sophisticated feedback, thereby creating a highly effective learning loop that can enhance the absorption of course materials. It guides students in refining their responses, enhancing their understanding and application of concepts in each iteration. This feedback is generated by a large language model (LLM), which circumvents the attentional errors often made by human graders, particularly when assessing a large volume of assignments. The LLM is capable of delivering intricate responses to students swiftly, as demonstrated by the examples provided in Table 1.
Our software is packaged as a Python library. LLM interactions are handled by the Guidance Python package (Microsoft, 2023). User interfaces and a JSON HTTP API are supported by FastAPI (Lathkar, 2023). We support traditional (e.g., JSON files, SQLite) as well as cloud-based data storage drivers. Our server can be run at low financial and computational cost through the combination of serverless deployment (e.g., to AWS Lambda) and serverless databases (e.g., AWS DynamoDB). Student responses are not stored by FreeText infrastructure by default. | 2308.02439#7 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
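The 2308.02439 chunk above describes a FastAPI server that bundles a student's answer with instructor-defined criteria and forwards them to an LLM. Below is a minimal sketch of that request/response flow; the endpoint path, field names, and the generate_feedback helper are illustrative assumptions, not the actual FreeText implementation:

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class FeedbackRequest(BaseModel):
    question: str   # the open-ended question shown to the student
    criteria: str   # held-out grading criteria, visible only to the LLM and instructor
    response: str   # the student's free-text answer

def generate_feedback(question: str, criteria: str, response: str) -> str:
    # Stub for the LLM call (e.g., via the Guidance package or an API client);
    # returns a canned message so the sketch runs without credentials.
    return f"Consider whether your answer addresses: {criteria}"

@app.post("/feedback")
def feedback(req: FeedbackRequest) -> dict:
    # Combine the student's response with the instructor criteria and return
    # the model-generated feedback in a single round trip.
    return {"feedback": generate_feedback(req.question, req.criteria, req.response)}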
2307.13528 | 8 | • We revisit the task of factuality detection and extend it in a way that allows for a better audit of current generative AI models.
1 https://www.wikipedia.org/
by provided evidence. RARR (Gao et al., 2022a) proposed a new approach by directly prompting LLMs to generate queries, retrieve evidence and determine factuality.
Existing works typically rely on either a given claim or given evidence and target a specific use case. However, in this paper, we introduce a more challenging yet practical task setting, i.e., factuality detection without explicit claims or evidence, and propose a framework capable of addressing this challenge in a variety of scenarios. | 2307.13528#8 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 8 | performances of different models. The scores should be interpretable.
⢠Minimizing data contamination. A consistent issue with popular benchmarks is that the recent LLMs contain some tasks in their training data [OpenAI, 2023]. This leads to overestimation of true model capabilities.
⢠Connection to general capabilities. If a model is trained on data similar to the benchmark, it is possible it achieves high performance without generalization or âintelligenceâ, failing to solve novel tasks of similar difficulty [Chollet, 2019]. Conversely, problems should not be pathological or overly adversarial, to avoid the dangers of underclaiming [Bowman, 2021].
# 3.1 Formatting
The benchmark consists of three types of questions: multiple choice, short answer, and open response, in descending order of proportion in the dataset.
⢠Multiple choice questions consist of a question and four to five possible answers, and the correct answer is the one that best answers the question. They were sourced from standardized tests, such as the MCAT and bar exam prep, and make up a large proportion of the dataset due to their ease of grading. | 2307.13692#8 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
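Because the multiple-choice portion described in the 2307.13692 chunk above can be graded mechanically, scoring reduces to matching predicted option letters against an answer key. The routine below is only an illustration; the id-to-letter dictionary format is an assumption, not the benchmark's actual schema:

def grade_multiple_choice(predictions, answer_key):
    # Both arguments map a question id to an option letter such as "B";
    # returns accuracy over the questions present in the answer key.
    correct = sum(
        1 for qid, gold in answer_key.items()
        if predictions.get(qid, "").strip().upper() == gold.strip().upper()
    )
    return correct / len(answer_key)

# Toy example: two of three predictions match the key.
key = {"q1": "A", "q2": "C", "q3": "D"}
preds = {"q1": "a", "q2": "B", "q3": "D"}
print(grade_multiple_choice(preds, key))  # 0.666...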
2307.13779 | 8 |
Table SM.2: Position of GPT-reported label
Model     First   Second  Third   Fourth  Any
GPT-3.5   0.429   0.343   0.257   0.171   0.800
GPT-4     0.371   0.343   0.314   0.114   0.771
# 1.3 Affect derivation
Appraisal derivation considers which appraisals predict specific emotions. As people reported multiple emotion labels, we predict the average valence, arousal, and dominance scores associated with each story. Thus, we performed backward linear regression separately to predict average valence, average arousal, and average dominance. This is first performed on human data and then on model data. Figure 5 illustrates the results for GPT4. Figure SM.2 shows the results for GPT3.5.
Appraisal theory claims the valence of responses is dictated by whether the situation is goal-congruent. This is indeed the association found in the human data, but GPT-3 primarily associates valence with future expectancy (which refers to whether the situation unfolded as expected). Through post hoc analysis, this seems to arise due to collinearities between GPT-3's interpretation of goal-congruence and future expectancy that are less present in human ratings. | 2307.13779#8 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
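The affect-derivation analysis in the 2307.13779 chunk above fits backward linear regressions from appraisal ratings to average valence, arousal, and dominance. The sketch below illustrates one such fit with statsmodels on synthetic data; the column names and the p-value-based elimination loop are assumptions for illustration, not the authors' exact procedure:

import numpy as np
import pandas as pd
import statsmodels.api as sm

# Synthetic stand-in for per-story appraisal ratings and an average valence score.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "goal_congruence": rng.normal(size=100),
    "future_expectancy": rng.normal(size=100),
    "relevance": rng.normal(size=100),
})
df["valence"] = 0.8 * df["goal_congruence"] + rng.normal(scale=0.5, size=100)

def backward_regression(data, target, threshold=0.05):
    # Start from all predictors and repeatedly drop the least significant one
    # until every remaining coefficient is significant at the given threshold.
    predictors = [c for c in data.columns if c != target]
    while predictors:
        model = sm.OLS(data[target], sm.add_constant(data[predictors])).fit()
        pvalues = model.pvalues.drop("const")
        worst = pvalues.idxmax()
        if pvalues[worst] < threshold:
            return model
        predictors.remove(worst)
    return None

result = backward_regression(df, "valence")
if result is not None:
    print(result.summary())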
2307.13854 | 8 | # 2 WEBARENA: WEBSITES AS AN ENVIRONMENT FOR AUTONOMOUS AGENTS
Our goal is to create a realistic and reproducible web environment. We achieve reproducibility by making the environment standalone, without relying on live websites. This circumvents technical challenges such as bots being subject to CAPTCHAs, unpredictable content modifications, and configuration changes, which obstruct a fair comparison across different systems over time. We achieve realism by using open-source libraries that underlie many in-use sites from several popular categories and importing data to our environment from their real-world counterparts.
2.1 CONTROLLING AGENTS THROUGH HIGH-LEVEL NATURAL LANGUAGE | 2307.13854#8 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 8 | Any Guidance-compatible LLM may be swapped into the FreeText server. That is, by default we access LLMs through the OpenAI API, but it is easy to swap in locally hosted or fine-tuned models: thus, privileged or sensitive information may be kept to on-premise compute resources, or users may opt to change which API-based LLM is accessed. For example, a more powerful LLM may be selected in cases where course content is particularly complex, or a simpler model may be used for more elementary course content.
One front-end that students can access is a Jupyter Notebook widget, developed using IPyWidgets (Kluyver et al., 2016), making it easy to incorporate language short-answer questions as part of a natural notebook-based active-learning environment.
The widget communicates with the backend | 2308.02439#8 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
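The 2308.02439 chunk above mentions a Jupyter front-end built with IPyWidgets that relays student answers to the backend. The snippet below sketches that general pattern only; post_to_backend is a stand-in stub, not the real FreeText widget or its HTTP call:

import ipywidgets as widgets
from IPython.display import display

def post_to_backend(answer):
    # Placeholder for an HTTP POST to the grading server; returns canned text
    # so the sketch runs without a FreeText deployment.
    return f"Received {len(answer.split())} words; LLM feedback would appear here."

answer_box = widgets.Textarea(placeholder="Type your answer here", rows=5)
submit = widgets.Button(description="Submit")
feedback_area = widgets.HTML()

def on_submit(_button):
    # Send the current answer to the backend and render the returned feedback.
    feedback_area.value = f"<i>{post_to_backend(answer_box.value)}</i>"

submit.on_click(on_submit)
display(widgets.VBox([answer_box, submit, feedback_area]))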
2307.13528 | 9 | Tool use in Large Pretrained Language Models Language models store limited knowledge within their parameters. To overcome this limitation, various tools have been introduced to assist language models in order to further expand their capabilities. For example, Press et al. (2022); Komeili et al. (2022) gathered information from the Internet to enhance question answering and dialog systems, respectively. Schick et al. (2023) trained a model capable of interacting with five tools including a calculator, a translation system, etc. Recently, Shen et al. (2023) introduced a framework that employs LLMs to connect various AI models from the machine learning communities to tackle AI tasks. Furthermore, Liang et al. (2023) proposed a new AI ecosystem that connects LLMs with millions of existing APIs to accomplish tasks. In this work, we explore tool use in LLMs for the task of factuality detection.
# 3 Revisiting Factuality in Generative AI
# 3.1 Definition | 2307.13528#9 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 9 | Short answer questions, on the other hand, ask for final answers in the format of a short phrase or mathematical expression. They were sourced from problem books such as Souza and Silva [2008], Gelca and Andreescu [2017], and physics book series Lim and Qiang [2001], Lim [2007], Lim [1998], Lim et al. [2019], and Lim [1996]. We generally avoided algebraic expressions, because of technical difficulties in the grading process. A given algebraic expression may have several equivalent forms (e.g. nontrivial functional relations for the functions appearing in the final answer), and a grading scheme which accounts for all possible variations across our entire dataset is not feasible. Moreover, physics problems often require answers introducing new notation that is not explicitly mentioned in the problem statement.
⢠Open response questions are more challenging: they consist of a question and a blank space for the answer. They were sourced from problem books and exams, such as the Harvard PhD comprehensive exams in mathematics [Harvard University, 2021]. Such tasks require manual grading. These questions are aspirational in nature, as current systems (e.g. ChatGPT) cannot produce satisfactory responses, even for the âelementaryâ problems.
# 3.2 Mathematics | 2307.13692#9 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
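The 2307.13692 chunk above notes that symbolic short answers are hard to grade because equivalent expressions differ as strings. One common partial remedy, sketched below with SymPy, is to test whether the difference of two expressions simplifies to zero; this only illustrates the difficulty rather than the benchmark's grading scheme, and it still misses many nontrivial functional identities:

import sympy as sp

def symbolically_equivalent(expr_a, expr_b):
    # Parse both answer strings and check whether their difference simplifies to zero.
    a, b = sp.sympify(expr_a), sp.sympify(expr_b)
    return sp.simplify(a - b) == 0

# Equivalent forms pass despite different surface strings; distinct answers do not.
print(symbolically_equivalent("sin(x)**2 + cos(x)**2", "1"))    # True
print(symbolically_equivalent("(x + 1)**2", "x**2 + 2*x + 1"))  # True
print(symbolically_equivalent("x**2", "x**3"))                  # False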
2307.13779 | 9 | Appraisal theory claims arousal should largely be determined by the relevance of the event to the individual (e.g., a threat to a very important goal would be more relevant than a threat to a minor goal). This is indeed the association found in the human data, but GPT associates arousal with other-accountability, though it should be noted that both associations are weak.
Finally, appraisal theory claims dominance should be associated with perceptions of control (positively associated with problem-focused coping and negatively associated with emotion-focused coping). Neither of these associations was found in either model. Self-reported dominance was associated with goal-congruence, which makes some sense as people are presumably more in control in positive situations. GPT-3 associates dominance with future expectancy, likely for the same reasons it uses this feature for valence.
[Figure SM.2 content: path models from appraisals to valence, arousal, dominance, and problem-/emotion-focused coping for self-reported emotion versus GPT-3-predicted emotion; reported fits include valence R² = .793 and dominance R² = .732 and .493 (p < .001).]
Figure SM.2: Appraisal derivation derived from human data (left of figure) and GPT3.5 (right).
# 2. Study 2
# 2.1 Original prompts
2.1.1 Prompt | 2307.13779#9 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 9 | 2.1 CONTROLLING AGENTS THROUGH HIGH-LEVEL NATURAL LANGUAGE
The WebArena environment is denoted as E = ⟨S, A, O⟩ with state space S, action space A (§2.4) and observation space O (§2.3). The transition function T : S × A → S is deterministic, and it is defined by the underlying implementation of each website in the environment. Performing a task described by a natural language intent i can be formulated as a partially observable Markov decision process (POMDP): at each time step t, an agent issues an action a_t ∈ A given the partial observation o_t ∈ O. Consequently, the action results in a new state s_{t+1} ∈ S and its corresponding observation o_{t+1} ∈ O. We propose a reward function r(a, s) to measure the success of a task execution, where a represents the sequence of actions, and s denotes all intermediate states. This reward function assesses if state transitions align with the expectations of the intents. For example, with an intent to place an order, it verifies whether an order has been placed. Additionally, it evaluates the accuracy of the agent's actions, such as checking the correctness of the predicted answer.
2.2 WEBSITE SELECTION | 2307.13854#9 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
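The 2307.13854 chunk above casts task execution as a partially observable decision process with a deterministic transition function and a trajectory-level reward r(a, s). The sketch below mirrors that loop with a toy environment; every name here is illustrative and is not the benchmark's actual code:

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Trajectory:
    # The action sequence a and intermediate states s used by the reward r(a, s).
    actions: list = field(default_factory=list)
    states: list = field(default_factory=list)

def run_episode(reset: Callable, step: Callable, policy: Callable,
                reward: Callable, intent: str, max_steps: int = 10) -> float:
    # Roll out the POMDP: the agent only sees observations o_t, while the
    # environment tracks the hidden state s_t; the reward is computed over the
    # whole trajectory once the episode ends.
    state, obs = reset()
    traj = Trajectory(states=[state])
    for _ in range(max_steps):
        action = policy(obs, intent)
        if action is None:          # the agent decides to stop
            break
        state, obs = step(state, action)
        traj.actions.append(action)
        traj.states.append(state)
    return reward(traj.actions, traj.states)

# Toy instantiation: the "intent" is to reach state 3 by incrementing a counter.
episode_return = run_episode(
    reset=lambda: (0, "counter=0"),
    step=lambda s, a: (s + 1, f"counter={s + 1}"),
    policy=lambda o, intent: "increment" if int(o.split("=")[1]) < 3 else None,
    reward=lambda actions, states: 1.0 if states[-1] == 3 else 0.0,
    intent="set the counter to 3",
)
print(episode_return)  # 1.0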
2308.02439 | 9 |
Python server described above. The widget is designed to be easily integrated into lecture and homework notebooks, enabling instructors to easily enrich existing teaching materials. A distinctive feature of our system is the intermediary server which equips the large language model with "held-out" information, such as a rubric for correct responses, accessible only to the LLM and instructor, and not to the student. This establishes the useful informational asymmetry between the evaluator and the student.
To include the widget in a Python environment, the instructor can include the following code:
!pip install freetext_jupyter
from freetext_jupyter import FreetextWidget

FreetextWidget(
    # This ID is generated by the instructor.
    "07b2c3ef-0f97-46bc-a11e-..."
)
When executed in a Jupyter notebook cell, this code will access the HTTP API to replace the widget with the corresponding question text for the student. Upon encountering the widget in a notebook, the student is presented with an open-ended question accompanied by a text box for response input. When they submit their response, the system transmits it to the server for combination with the feedback criteria set by the instructor. | 2308.02439#9 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 10 | # 3 Revisiting Factuality in Generative AI
# 3.1 Definition
Versatile Factuality In most previous works, factuality has been defined as whether a claim in a text can be supported by evidence from a separate, trustworthy knowledge base, with applications in fact-checking (Thorne et al., 2018b) (where the knowledge base is a large source like Wikipedia) and summarization (Kryscinski et al., 2020) (where the knowledge base is an input document or documents). In this paper, we extend this definition to whether the claims made in generated signals (which could be text, code, or mathematical expressions and so on) can be supported by evidence under specific rules. Specifically, these rules can range from consistency with a knowledge base derived from Wikipedia, to a verification rule specified within a Python library, or an operational rule derived from mathematics. By adopting this
broader definition, we are able to establish a unified framework for addressing factuality issues in generative AI beyond just the textual domain. | 2307.13528#10 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
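Under the broader definition in the 2307.13528 chunk above, "evidence" may be an executable rule rather than a text passage: an arithmetic claim, for instance, can be verified by evaluating both sides. The sketch below illustrates that idea generically and is not FacTool's implementation:

import ast
import operator

# Allowed binary operators for safely evaluating simple arithmetic claims.
_OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
        ast.Mult: operator.mul, ast.Div: operator.truediv,
        ast.Pow: operator.pow}

def _eval(node):
    # Recursively evaluate a parsed arithmetic expression containing only
    # numbers and the operators whitelisted above.
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
        return _OPS[type(node.op)](_eval(node.left), _eval(node.right))
    raise ValueError("unsupported expression")

def check_math_claim(claim, tolerance=1e-9):
    # A claim of the form "<expression> = <expression>" counts as factual if
    # both sides evaluate to (numerically) the same value.
    lhs, rhs = claim.split("=")
    left = _eval(ast.parse(lhs, mode="eval").body)
    right = _eval(ast.parse(rhs, mode="eval").body)
    return abs(left - right) <= tolerance

print(check_math_claim("12 * 12 = 144"))   # True
print(check_math_claim("2**10 = 1000"))    # False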
2307.13692 | 10 | # 3.2 Mathematics
This part of the dataset is the most diverse. It includes contest mathematics problems as well as "university mathematics" (i.e. mathematics traditionally taught in universities at the undergraduate and beginning graduate level). The contest problems are sourced from Gelca and Andreescu [2017] and Brayman and Kukush [2018], and the university mathematics problems are sourced from Souza and Silva [2008] and Harvard University [2021]. The dataset does not include high school contest problems because those are already present in other well-known benchmarks [Hendrycks et al., 2021]. The Putnam and Brayman books both contain official solutions, which we also include in the dataset. This can be useful for fully automating the grading process, which we leave to future work.
For university mathematics, we pick Souza and Silva [2008] for its large selection of "standard" undergraduate mathematics problems, as well as many problems suitable for the short answer portions. We also select Harvard University [2021] because it covers topics that other collections of exams rarely cover, such as representation theory of finite groups and algebraic topology.
Table 1: Types of problems in the benchmark by subject area. | 2307.13692#10 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 10 | 4
# 2. Study 2
# 2.1 Original prompts
2.1.1 Prompt
In this section, we provide a complete report of the prompts queried for each state in the game (Lost, Losing, Start, Winning, Won) and three utility levels (Low, Medium, and High) using the coin flip game adapted from [7].
For utility in ("a little bit", "a medium amount", "a lot"):
For game_state in ("Imagine you have lost the game.", "Imagine you have obtained zero heads and your opponent has two heads. It is your turn.", "Imagine the game is just about to start. It is your turn.", "Imagine you have obtained two heads and your opponent has zero heads. It is your turn.", "Imagine you have won the game."):
"Imagine you are playing a simple game with someone. "
"You take turns flipping a coin. The first person to get four heads wins. "
If lost or won:
"{game_state} Imagine that you were playing for {utility} of money. "
Else:
"{game_state} Imagine that you are playing for {utility} of money. " | 2307.13779#10 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 10 | 2.2 WEBSITE SELECTION
To decide which categories of websites to use, we first analyzed approximately 200 examples from the authorsâ actual web browser histories. Each author delved into their browsing histories, summarizing the goal of particular segments of their browser session. Based on this, we classified the visited websites into abstract categories. We then identified the four most salient categories and implemented one instance per category based on this analysis: (1) E-commerce platforms supporting online shopping activities (e.g., Amazon, eBay), (2) social forum platforms for opinion exchanges (e.g., Reddit, StackExchange), (3) collaborative development platforms for software development (e.g., GitLab), and (4) content management systems (CMS) that manage the creation and revision of the digital content (e.g., online store management).
In addition to these platforms, we selected three utility-style tools that are frequently used in web- based tasks: (1) a map for navigation and searching for information about points of interest (POIs) such as institutions or locations (2) a calculator, and (3) a scratchpad for taking notes. As information- seeking and knowledge acquisition are critical in web-based tasks, we also incorporated various knowledge resources into WebArena. These resources range from general information hubs, such as the English Wikipedia, to more specialized knowledge bases, such as the website user manuals.
| 2307.13854#10 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 10 | In the next stage, the student response and the pre-defined feedback criteria are bundled into a payload dispatched to a large language model. The LLM processes this payload and produces personalized feedback to the response. This feedback is relayed back to the student with seconds of latency through the web or notebook interface, offering them the immediate opportunity to reflect, amend, and improve their response as desired (Fig. 2). Our tool
is designed to be easily deployable and scalable. The FreeText server can be run in resource-constrained or serverless platforms such as AWS Lambda. This allows for easy deployment and scaling, which is particularly important for large-scale projects and massive-scale courses (van Viegen et al., 2021). Our API can also be combined with other existing educational tools in order to capture and store student responses for instructor review.
# Question Design
Instructors can provide a question for students to answer – either programmatically, by accessing our HTTP API – or graphically in the browser using the simple web application UI. Instructors can also provide optional assessment criteria – text like "make sure the student mentions DNA base pairs in their answer."
| 2308.02439#10 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
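Following the question-design workflow in the 2308.02439 chunk above, an instructor-side client only needs to send the question text and optional assessment criteria and keep the returned question ID. The snippet below sketches such a client with the requests library; the endpoint path and payload fields are hypothetical, not FreeText's documented API:

import requests

def create_question(base_url, prompt, criteria=""):
    # Hypothetical endpoint and payload shape: POST the question text plus the
    # held-out grading criteria, then read back a generated question ID.
    payload = {"prompt": prompt, "criteria": criteria}
    resp = requests.post(f"{base_url}/questions", json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["question_id"]

# Example call (assumes a compatible server is running locally):
# qid = create_question(
#     "http://localhost:8000",
#     "Explain how DNA base pairing enables replication.",
#     "Make sure the student mentions complementary base pairs (A-T, G-C).",
# )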
2307.13528 | 11 | broader definition, we are able to establish a unified framework for addressing factuality issues in generative AI beyond just the textual domain.
Fine-grained Factuality One can usually detect the factuality of a given generated signal (e.g., text) at different levels of granularity, such as sentences and documents. A more granular assessment can be particularly valuable because it (1) not only allows users to pinpoint where inaccuracies occur (Liu et al., 2021) but also (2) serves as a reward model for developers to refine their generative systems (Lightman et al., 2023).
However, implementing fine-grained factuality detection is challenging due to two reasons: (1) specifying the desired granularity level without ambiguity, and (2) extracting claims in line with the predetermined granularity level. In this paper, we argue that by utilizing the powerful instruction-following ability and the natural language interface of LLMs, we can effectively address the challenge of defining and extracting fine-grained claims through claim definition-based few-shot prompting. More details can be found in §4.1. | 2307.13528#11 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
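The 2307.13528 chunk above proposes specifying the claim granularity in natural language and extracting claims by few-shot prompting. A generic prompt-assembly sketch is shown below; the definition wording and the demonstration are placeholders, not FacTool's actual prompts:

CLAIM_DEFINITION = (
    "A claim is an atomic, verifiable statement about the world that is "
    "explicitly asserted in the text."
)

FEW_SHOT_EXAMPLES = [
    ("The Eiffel Tower, finished in 1889, is in Paris.",
     ["The Eiffel Tower was finished in 1889.", "The Eiffel Tower is in Paris."]),
]

def build_claim_extraction_prompt(response_text):
    # Assemble: the claim definition, a few worked demonstrations, and the
    # new model response whose claims should be listed one per line.
    lines = [CLAIM_DEFINITION, ""]
    for text, claims in FEW_SHOT_EXAMPLES:
        lines.append(f"Text: {text}")
        lines.extend(f"- {c}" for c in claims)
        lines.append("")
    lines.append(f"Text: {response_text}")
    lines.append("List the claims, one per line, starting each with '- '.")
    return "\n".join(lines)

print(build_claim_extraction_prompt("Marie Curie won two Nobel Prizes in physics."))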
2307.13692 | 11 |
Table 1: Types of problems in the benchmark by subject area.
Subject          Answer Type                  Number
Mathematics      Numerical                    52
Mathematics      Symbolic                     34
Mathematics      Proof-like                   19
Physics          Numerical                    80
Physics          Numerical (w/ image)         18
Physics          Symbolic                     18
Physics          Symbolic (w/ image)          13
Law              Multiple Choice              627
MCAT (Reading)   Multiple Choice              165
MCAT (Science)   Multiple Choice              144
MCAT (Science)   Multiple Choice (w/ image)   37
# 3.3 Physics
The physics problems are structured similarly to the math problems. The main difference is that some physics problems contain figures, and there are more problems with numerical answers. The problems were sourced from the Major American Universities PhD Qualifying Questions and Solutions series [Zhongguo-Kexue-Jishu-Daxue, 1990].
# 3.4 MCAT
The MCAT test contains multiple choice problems testing biology, psychology, chemistry, physics, and reading comprehension. The MCAT problems are sampled from the third edition of McGraw-Hill Education 3 MCAT Practice Tests [Campbell et al., 2017] and cover both science and reading questions. This book was chosen as very few of these problems appear in standard web-searchable sources, limiting contamination. As in the previous categories, we pick problems which are self-contained. Because some MCAT science questions are accompanied by images, we accompany such questions with corresponding image files.
# 3.5 Law | 2307.13692#11 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 11 | Else:
"{game_state} Imagine that you are playing for {utility} of money. "
"Answer the following questions by providing only a single number on a scale from 1 to 100, where 1 means not at all and 100 means a lot: 1. How important to you is it that you win? 2. How likely is it that you win? 3. How much control do you have over winning? 4. How much do you feel hope? 5. How much do you feel fear? 6. How much do you feel joy? 7. How much do you feel sadness? 8. How much do you feel anger? "
"Please do not respond anything else other than the answers to the 8 questions above. "
"Please put the answer in the following JSON format and make all data types to be string and use all lowercase. It is very important. "
'{"1": "", "2": "", "3": "", "4": "", "5": "", "6": "", "7": "", "8": ""}'
2.1.1 Results | 2307.13779#11 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
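Since the full prompt above (spanning this 2307.13779 chunk and the previous one) requests a strict eight-field JSON answer, the crossed game-state and utility conditions and the response parsing can be scripted directly. The sketch below abbreviates the question list and stubs out the GPT call with query_model, so it only illustrates the loop structure:

import json
from itertools import product

UTILITIES = ["a little bit", "a medium amount", "a lot"]
GAME_STATES = {
    "lost": "Imagine you have lost the game.",
    "losing": "Imagine you have obtained zero heads and your opponent has two heads. It is your turn.",
    "start": "Imagine the game is just about to start. It is your turn.",
    "winning": "Imagine you have obtained two heads and your opponent has zero heads. It is your turn.",
    "won": "Imagine you have won the game.",
}

def build_prompt(state_key, utility):
    # Past tense ("were playing") for finished games, present tense otherwise,
    # mirroring the conditional in the original prompt construction.
    verb = "were" if state_key in ("lost", "won") else "are"
    return (
        "Imagine you are playing a simple game with someone. "
        "You take turns flipping a coin. The first person to get four heads wins. "
        f"{GAME_STATES[state_key]} Imagine that you {verb} playing for {utility} of money. "
        "Answer the 8 questions with a single number from 1 to 100 each, "
        'returned as JSON like {"1": "", "2": "", ..., "8": ""}.'
    )

def query_model(prompt):
    # Stub for the GPT call; returns a fixed well-formed answer so this runs offline.
    return json.dumps({str(i): "50" for i in range(1, 9)})

ratings = {}
for state_key, utility in product(GAME_STATES, UTILITIES):
    answer = json.loads(query_model(build_prompt(state_key, utility)))
    ratings[(state_key, utility)] = {k: int(v) for k, v in answer.items()}

print(ratings[("start", "a lot")]["4"])  # question 4 is the hope rating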
2307.13854 | 11 |
Figure 3: We design the observation to be the URL and the content of a web page, with options to represent the content as a screenshot (left), HTML DOM tree (middle), and accessibility tree (right). The contents of the middle and right figures are trimmed to save space.
Implementation We leveraged open-source libraries relevant to each category to build our own versions of an E-commerce website (OneStopShop), GitLab, Reddit, an online store content management system (CMS), a map, and an English Wikipedia. Then we imported sampled data from their real-world counterparts. As an example, our version of GitLab was developed based on the actual GitLab project.1 We carefully emulated the features of a typical code repository by including both popular projects with many issues and pull requests and smaller, personal projects. Details of all websites in WebArena can be found in Appendix A.1. We deliver the environment as dockers and provide scripts to reset the environment to a deterministic initial state (See Appendix A.2).
2.3 OBSERVATION SPACE | 2307.13854#11 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 11 |
FreeText can use question content to automatically establish grading criteria, or it can use the assessment criteria to improve the text of the question. The latter process works by asking the AI to serve as a student and answer a question while oblivious to the instructor's grading criteria. Then, the answer is automatically evaluated by a separate instantiation of the LLM – this time, against the instructor criteria. The assessment model determines if the student has been unfairly penalized due to omission of requirements (or a lack of clarity) in the original question text. If so, the question is updated to better encompass the requirements of the grading criteria. | 2308.02439#11 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 12 | Structurally speaking, given a prompt (e.g., a query or instruction) and the corresponding model-generated response, the fine-grained factuality detection task involves the following concepts:
Prompt (p): a query or instruction that users provide to the generative model.
Response (r): a piece of text (usually in long form) generated by the generative model.
Claim (c): a statement inferred from the model response, whose granularity is defined by a natural language text.
Evidence (e): the available information (e.g., knowledge base, pre-defined rules) that supports or demonstrates the truth or validity of a claim.
# Instantiations in Different Scenarios
Using the above task definition, we can define factuality in different application scenarios (see also Tab. 2).
Knowledge-based QA Knowledge-based (KB) QA (Chen et al., 2017) aims to answer questions using a given knowledge base or open-domain data source (e.g., Wikipedia). In this task, we define factuality as how well each claim in the generated answer is supported by world knowledge. In this paper, we consider a more challenging scenario: open-domain QA that requires long-form answers, rather than short ones. | 2307.13528#12 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 12 | # 3.5 Law
Applying law involves the application of logical reasoning, in addition to grasping legal knowledge. This makes assessments of legal skills an especially attractive type of language model benchmark, where we are attempting to assess the reasoning and intelligence of these models. Furthermore, if the models better understand law, they can be more reliable and ultimately more useful in real-world applications, potentially even increasing the efficiency and transparency of governments more broadly.
Most lawyers in the U.S. go to law school, graduate, then study for the Bar Examination, and then must pass the bar before going on to practice law professionally. To evaluate legal understanding of the models, we use an older Bar Examination practice set that, to the best of our knowledge, is not available online in a way that could have led to its inclusion in training data for the language models that we are assessing. The practice bar exam we administer to the various language models covers most major areas of law and therefore it tests legal reasoning and broad U.S. legal knowledge.
# 4 Evaluation
We evaluate current LLMs on all text-only problems in our dataset. Other LLM benchmark papers do not evaluate on multimodal tasks due to the lack of good multimodal models; we follow suit. Given public communications about GPT-4 [OpenAI, 2023] and Gemini [Ghahramani, 2023], it is likely the physics and MCAT image problems will be useful for testing multimodal LLMs soon. | 2307.13692#12 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 12 | 2.1.1 Results
Figure SM.3 demonstrates emotion intensity from human self-report compared with GPT in response to different states of the coin-flip game. Intensity is on the y-axis, and the reported probability of winning the game is on the x-axis. GPT graphs show 95% confidence intervals of the mean.
Based on the two-way ANOVA conducted on the four dependent variables (hope, fear, joy, and sadness), the main effects of relevance and game state, the interaction effect between relevance and game state, and the corresponding partial eta squared (η²) values with 95% confidence intervals (CI) are summarized in Table SM.3.
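As a hedged illustration of the analysis described above (not the authors' code), a two-way ANOVA with an interaction term can be run per dependent variable with statsmodels, assuming a long-format table with columns relevance, game_state, and hope; the data below are placeholders.

```python
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Illustrative long-format data: one row per participant rating (placeholder values).
df = pd.DataFrame({
    "relevance": ["low", "medium", "high"] * 20,
    "game_state": ["p=0.1", "p=0.3", "p=0.5", "p=0.7", "p=0.9"] * 12,
    "hope": [50 + i % 7 for i in range(60)],
})

# Main effects of relevance and game state plus their interaction.
model = ols("hope ~ C(relevance) * C(game_state)", data=df).fit()
print(anova_lm(model, typ=2))
```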
[Figure SM.3: panels for Hope, Fear, Joy, and Sadness showing emotion intensity versus probability of winning, with curves for low-, medium-, and high-utility conditions.] | 2307.13779#12 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 12 | 2.3 OBSERVATION SPACE
We design the observation space to roughly mimic the web browser experience: a web page URL, the opened tabs, and the web page content of the focused tab. WebArena is the first web environment to consider multi-tab web-based tasks to promote tool usage, direct comparisons and references across tabs, and other functionalities. The multi-tab functionality offers a more authentic replication of human web browsing habits compared to maintaining everything in a single tab. We provide flexible configuration to render the page content in many modes (see Figure 3 for an example): (1) the raw web page HTML, composed of a Document Object Model (DOM) tree, as commonly used in past work (Shi et al., 2017; Deng et al., 2023; Li et al., 2020); (2) a screenshot, a pixel-based representation that represents the current web page as an RGB array; and (3) the accessibility tree of the web page.2 The accessibility tree is a subset of the DOM tree with elements that are relevant and useful for displaying the contents of a web page. Every element is represented as its role (e.g., a link), its text content, and its properties (e.g., whether it is focusable). Accessibility trees largely retain the structured information of a web page while being more compact than the DOM representation. | 2307.13854#12 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 12 | This process of iteratively incorporating assessment criteria is subtly different from simply including the criteria in the question text: For example, if the question text is, "What is the Rosetta Stone?" and the criteria include, "Mention why the Ptolemaic dynasty created the Rosetta Stone", a bad question update would be to explicitly ask about the Egyptian political system, as this gives the student more information than the instructor originally intended. A better question update would be "Explain what the Rosetta Stone is and the context of its creation," because this nudges the student to discuss the right material but does not give any new information.
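A minimal sketch of this criteria-driven question-improvement loop is shown below; the complete() function and prompt wording are assumptions standing in for an LLM call, not FreeText's actual implementation.

```python
def complete(prompt: str) -> str:
    # Placeholder for an LLM client call (assumption, not FreeText's API).
    raise NotImplementedError("plug in an LLM client here")

def improve_question(question: str, criteria: str) -> str:
    # 1. Answer the question as a student who has never seen the criteria.
    student_answer = complete(f"Answer this question:\n{question}")
    # 2. Grade that blind answer against the instructor's criteria.
    verdict = complete(
        "Does this answer lose points only because the question never stated "
        f"a requirement?\nCriteria: {criteria}\nAnswer: {student_answer}\n"
        "Reply UNFAIR or FAIR."
    )
    # 3. If the student was unfairly penalized, rewrite the question so it
    #    covers the criteria without revealing any new information.
    if "UNFAIR" in verdict.upper():
        return complete(
            f"Rewrite the question '{question}' so it encompasses these "
            f"criteria without giving away their content:\n{criteria}"
        )
    return question
```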
# Question Presentation
There are two built-in methods to present questions to students: the first is a simple web API, which can be used standalone, coupled with response-collection tools, or embedded within other web applications. The second is a Jupyter Notebook widget that can be embedded in tutorial coding notebooks.
The JSON web API endpoints may be accessed directly by application code, or students can access a simple web user interface. This interface comprises a question display and a textbox for student responses (see Supplemental Materials). Feedback to students is rendered beneath the response box upon answer submission, and students may reuse the same page to resubmit amended answers. | 2308.02439#12 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 13 | Prompt (p), Response (r), Claim (c), and Evidence (e) by task. Knowledge-based QA: prompt = Question, response = Long-form answer, claim = Atomic component unit, evidence = Web searched results. Code Generation: response = Executable code, claim = Code snippet, evidence = Python library. Math Problems: prompt = Math problems, response = Math solution, claim = Math calculation, evidence = Calculator. Sci. Lit Review: prompt = Scientific question, response = Long-form review, claim = Tuple (paper title, year, authors), evidence = Google scholar.
Table 2: Factuality definition in different tasks. "Sci. Lit Review" represents scientific literature review.
Code Generation The code generation task (Yin and Neubig, 2017) involves generating executable code based on a given query. We define factuality in code generation as how well the generated code, as a whole, can be executed correctly within a specific programming language (e.g., Python) and fulfills the provided requirements. This definition is grounded in an execution-based approach to code evaluation, which measures the correctness of generated code by executing it against some test case inputs and comparing its output to the expected output.
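A minimal sketch of such an execution-based check is given below; the function names and the test-case format are illustrative, not FacTool's actual interface.

```python
def passes_tests(code: str, func_name: str, test_cases) -> bool:
    """Run generated code and check it against (args, expected) pairs."""
    namespace = {}
    try:
        exec(code, namespace)            # define the candidate function
        func = namespace[func_name]
        return all(func(*args) == expected for args, expected in test_cases)
    except Exception:
        return False                     # any crash or missing name counts as incorrect

generated = "def add(a, b):\n    return a + b\n"
print(passes_tests(generated, "add", [((1, 2), 3), ((0, 0), 0)]))  # True
```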
the claim. On the other hand, the ability of LLMs to utilize multiple tools paves the way for multiple tool-augmented factuality detection. For example, by directly using ChatGPT plugins,3 we can integrate multiple tools into a chatbot. | 2307.13528#13 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 13 | Models We evaluate ChatGPT (gpt3.5-turbo-0301), GPT 3.5 (text-davinci-003), GPT-4 with 8k context length (gpt-4-0314), and Claude (claude-v1.3-100k). We evaluate all question types using task-specific instructions and chain of thought. In chat models, we put the instructions as the system prompt; otherwise we put them at the beginning of the prompt.
In all problem types, in order to extract the model's final answer, we instruct the model to write its final answer at the end of the response after the delimiter ANSWER:. We then parse the model-generated final answer as the remaining text after the delimiter. The response is marked as incorrect if the delimiter is not found. Due to the differences in evaluation for multiple choice versus open-ended responses, we adopt multiple evaluation procedures.
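The delimiter-based extraction step amounts to a few lines; the sketch below only illustrates "take the text after the final ANSWER: marker, otherwise mark the response incorrect", and the exact parsing rules used for ARB may differ.

```python
from typing import Optional

def extract_final_answer(response: str, delimiter: str = "ANSWER:") -> Optional[str]:
    # No delimiter means the response is marked incorrect upstream.
    if delimiter not in response:
        return None
    # Take everything after the last occurrence of the delimiter.
    return response.rsplit(delimiter, 1)[-1].strip()

print(extract_final_answer("Reasoning steps...\nANSWER: (C)"))  # "(C)"
print(extract_final_answer("no delimiter here"))                # None
```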
Multiple choice To evaluate multiple choice questions, we can simply compare the extracted final answer to the ground truth. A response is considered correct if the extracted choice matches the ground truth choice. With appropriate prompting, all models output a parsable answer > 97% of the time. We conduct a separate manual evaluation on a sampled subset of the questions to check that our parsing procedure is not mischaracterizing the true performance of the model. | 2307.13692#13 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13854 | 13 | We provide an option to limit the content to the contents within a viewport for all modes. This ensures that the observation can be input into a text-based model with limited context length or an image-based model with image size or resolution requirements.
2.4 ACTION SPACE
Following previous work on navigation and operation in web and embodied environments (Shi et al., 2017; Liu et al., 2018), we design a compound action space that emulates the keyboard and mouse operations available on web pages. Figure 4 lists all the available actions categorized into three distinct groups. The first group includes element operations such as clicking, hovering, typing, and key combination pressing. The second comprises tab-related actions such as opening, closing, and switching between tabs. The third category consists of URL navigation actions, such as visiting a specific URL or navigating forward and backward in the browsing history.
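The grouping of the compound action space can be illustrated with the small sketch below; the action names are paraphrased for the example and are not WebArena's exact API.

```python
# Three categories of actions, mirroring the description above.
ELEMENT_OPS = {"click", "hover", "type", "press"}      # operate on page elements
TAB_ACTIONS = {"new_tab", "close_tab", "tab_focus"}    # manage browser tabs
URL_ACTIONS = {"goto", "go_back", "go_forward"}        # navigate URLs / history

def describe(action: str, argument: str = "") -> str:
    if action in ELEMENT_OPS:
        return f"element operation: {action} {argument}"
    if action in TAB_ACTIONS:
        return f"tab action: {action} {argument}"
    if action in URL_ACTIONS:
        return f"URL navigation: {action} {argument}"
    raise ValueError(f"unknown action: {action}")

print(describe("click", "[1582]"))
print(describe("goto", "http://localhost:7770"))
```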
Building on these actions, WebArena provides agents with the flexibility to refer to elements for operation in different ways. An element can be selected by its on-screen coordinates, (x, y), or by a unique element ID that is prepended to each element. This ID is generated when traversing the Document Object Model (DOM) or accessibility tree. With element IDs, the element selection is transformed into an n-way classification problem, thereby eliminating any disambiguation efforts required from the agent or the underlying implementation. For example, issuing the action click | 2307.13854#13 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 13 | The Jupyter Notebook widget is designed to make it easy for instructors to include open-ended questions in their assignments and subject the grading of student responses to custom grading criteria. This flexibility makes it easy for instructors to tailor the tool to their specific needs and teaching style.
# Feedback to Students
Our tool provides two types of feedback to students. The first is a holistic text response that provides feedback on the entire answer as a whole. The second is span-bound feedback (referring to a specific substring of the response) that can be used to highlight specific parts of the text that are erroneous or otherwise need | 2308.02439#13 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 14 | The framework is illustrated in Fig. 1, which consists of five main components: claim extraction, query generation, tool querying, evidence collection, and agreement verification. We elaborate each component below.
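The five-component flow can be summarized as the skeleton below; these are stubs showing the data flow only, not FacTool's real functions or signatures.

```python
def detect_factual_errors(prompt: str, response: str, tool) -> dict:
    """Stub pipeline: claim extraction -> query generation -> tool querying
    -> evidence collection -> agreement verification."""
    results = {}
    claims = extract_claims(response)                       # 1. claim extraction
    for claim in claims:
        queries = generate_queries(claim)                   # 2. query generation
        raw = [tool(q) for q in queries]                    # 3. tool querying
        evidence = collect_evidence(raw)                    # 4. evidence collection
        results[claim] = verify_agreement(claim, evidence)  # 5. agreement verification
    return results

# Stub components; real implementations depend on the task and the tool used.
def extract_claims(response): ...
def generate_queries(claim): ...
def collect_evidence(raw): ...
def verify_agreement(claim, evidence): ...
```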
# 4.1 Claim Extraction
Math Problem Solving The math problem solving task involves the use of automated methods to address mathematical problems (Cobbe et al., 2021). At the claim level, factuality in math problem solving is defined as the extent to which the generated statements adhere to the calculation rules. At the response level, factuality in math problem solving is defined as how effectively the overall mathematical solution addresses the given problem.
Extracting claims from responses under various task settings is challenging due to the inconsistent definitions of claims across tasks and scenarios. This inconsistency hinders the development of applications such as text summarization evaluation and factuality detection. To tackle this, we propose an approach in this paper that treats claim extraction as a process guided by LLM prompts based on the specific definition of claims. This approach offers the following advantages: | 2307.13528#14 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 14 | Numerical To evaluate problems with a numerical final answer, we first extract the delimited model answer as above. In the physics problems, many answers are in units; we prompt the model with information about the unit, and instruct it to fully simplify its answer and omit any units. However, sometimes the model forgets to do either or both, and so we apply a series of regexes to remove units. We then attempt to parse the result into a mathematical expression using Python's SymPy library [Meurer et al., 2017]. If this parsing fails, the answer is marked as incorrect. Once parsed, we score the model answer as correct if the relative error |model_answer - ground_truth| / |ground_truth| is within the threshold (10^-2).
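A minimal sketch of this numerical scoring step is shown below: parse the cleaned answer with SymPy and accept it if the relative error is within 1e-2, the threshold reported for Figure 1. Unit stripping and prompting details are omitted, and the function name is illustrative.

```python
import sympy

def score_numeric(model_answer: str, ground_truth: float, tol: float = 1e-2) -> bool:
    try:
        value = float(sympy.sympify(model_answer))   # parses "3/4", "2*pi", ...
    except (sympy.SympifyError, TypeError, ValueError):
        return False                                  # unparsable -> incorrect
    # Accept if within the relative error threshold.
    return abs(value - ground_truth) <= tol * abs(ground_truth)

print(score_numeric("2*pi", 6.28))          # True (within 1% relative error)
print(score_numeric("not a number", 6.28))  # False
```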
Symbolic Problems with symbolic answers are less structured and harder to parse. To do so, we again leverage SymPy, first normalizing expressions to contain a default set of variable names and then checking for equivalence up to a permutation of the variables. However, this approach is error-prone and only works for the subset of symbolic responses in a function form. More advanced responses, such as those containing set notation, require human evaluation.
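A rough sketch of the symbolic-equivalence idea follows: rename free symbols to a canonical ordering, then ask SymPy whether the difference simplifies to zero. The real pipeline also tries variable permutations and, as noted above, still needs human review for harder answer formats.

```python
import sympy

def canonical(expr_str: str) -> sympy.Expr:
    expr = sympy.sympify(expr_str)
    # Rename free symbols to a canonical ordering (x0, x1, ...).
    mapping = {s: sympy.Symbol(f"x{i}")
               for i, s in enumerate(sorted(expr.free_symbols, key=str))}
    return expr.subs(mapping)

def symbolically_equal(expr_a: str, expr_b: str) -> bool:
    # Equivalent expressions should have a difference that simplifies to zero.
    return sympy.simplify(canonical(expr_a) - canonical(expr_b)) == 0

print(symbolically_equal("sin(t)**2 + cos(t)**2", "1"))  # True
print(symbolically_equal("a*b + a", "x*(y + 1)"))        # True after renaming
```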
Proof-like Natural language proofs cannot be evaluated automatically; the authors with training in mathematics grade the proofs. Further manual human evaluation requires a thorough inspection of the intermediate reasoning steps. This makes evaluation expensive in practice. | 2307.13692#14 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 14 | Table SM.3 (two-way ANOVA results; rows: GPT-3.5 and GPT-4, each for Hope, Fear, Joy, and Sadness; columns: Goal-relevance, Game State, Interaction Effect; F statistics listed column by column). Goal-relevance: F(2, 1485) = 2.15, p = 0.117, η² = .003; F(2, 1485) = 62.44, p < .001***, η² = .08; F(2, 1485) = 5.98, p = .002***, η² = .008; F(2, 1485) = 30.27, p < .001***, η² = .04; F(2, 1485) = 173.0, p < .001***, η² = .19; F(2, 1485) = 2241.8, p < .001***, η² = .75; F(2, 1485) = 39.67, p < .001***, η² = .05; F(2, 1485) = 364, p < .001***, η² = .33. Game State: F(4, 1485) = 579.34, p < .001***, η² = .61; F(4, 1485) = 645.67, p < .001***, η² = .63; F(4, 1485) = 2409.07, p < .001***, η² = .87 | 2307.13779#14 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 14 | Footnotes: 1. https://gitlab.com/gitlab-org/gitlab 2. https://developer.mozilla.org/en-US/docs/Glossary/Accessibility_tree
[1582] clicks the button given the observation of [1582] Add to Cart. This flexible element selection allows WebArena to support agents designed in various ways (e.g., accepting input from different modalities) without compromising fair comparison metrics such as step count.
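The element-ID scheme can be illustrated with the short sketch below, in which the observation string and the click format are simplified stand-ins for the real traces rather than WebArena's exact output.

```python
# Each observation line begins with the unique element ID prepended to the element.
observation = "\n".join([
    "[1579] heading 'One Stop Market'",
    "[1582] button 'Add to Cart'",
    "[1590] link 'View Details'",
])

def element_text(obs: str, element_id: int) -> str:
    # Look up the line whose bracketed ID matches, then return its content.
    for line in obs.splitlines():
        if line.startswith(f"[{element_id}]"):
            return line.split("]", 1)[1].strip()
    raise KeyError(element_id)

action = "click [1582]"
print(action, "->", element_text(observation, 1582))  # button 'Add to Cart'
```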
User Role Simulation Users of the same website often have disparate experiences due to their distinct roles, permissions, and interaction histories. We emulate this scenario by generating unique user profiles on each platform. The details can be found in Appendix A.3.
# 3 BENCHMARK SUITE OF WEB-BASED TASKS
We provide a benchmark with 812 test examples on grounding high-level natural language instructions to interactions in WebArena. Each example has a metric to evaluate the functional correctness of the task execution. In this section, we first formally define the task of controlling an autonomous agent through natural language. Then we introduce the annotation process of our benchmark.
INTENT COLLECTION | 2307.13854#14 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 14 | [Figure 2: sequence diagram with Educator, API, DB, and LLM exchanging the Question & Grading Criteria, Question ID, Student Response, Prompt Payload, and Validated Response (panels A-C; see caption below). Panel C widget code as captured: from Freetext_jupyter import FreetextWidget; widget = FreetextWidget("07b2c3ef-0F97-46bc-at1e-fcSc06c381c2"); widget.display()]
Figure 2. A sequence diagram illustrating the flow of information within the FreeText system. A. First, an instructor formulates a question by supplying a student-facing question ("Question") along with grading criteria for the LLM to evaluate student responses. In return, the educator obtains a unique identifier from the database, instrumental in retrieving the question text in the following step. B. Equipped with a unique Question identifier, a student provides an answer to the educator's query ("Response"). The API receives this request, pairing the Response with a Prompt based upon the educator's question and criteria, and directs them towards a large language model for evaluation. C. A screenshot of the FreeText Jupyter widget integrated into an interactive code notebook. | 2308.02439#14 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 15 | Scientific Literature Review Writing The scientific literature review writing task (Jha et al., 2015) aims to analyze and synthesize existing research on a specific topic in a field of study. In this task, we define factuality as whether the generated scientific literature review correctly cites existing scientific literature, including the correct mention of authors and publication years.2
(i) Leveraging the strong instruction-following capabilities of LLMs can significantly reduce the costs associated with data annotation and model training for claim extraction.
(ii) When developing a system or constructing a dataset for an application that relies on the definition of claims, one simply needs to provide a textual definition of the claim using a large model. This enables future researchers to effectively utilize these definitions as a foundation in their work.
# 4 Approach
We propose a tool-augmented framework for detecting factual errors that can apply a unified approach across various tasks. The motivation for using tools is twofold. On one hand, each tool embodies the domain expertise, assisting us in the effective gathering of evidence that verifies the correctness of | 2307.13528#15 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 15 | Model-based evaluation To address the difficulties in developing automated metrics for evaluating more advanced problems, we experiment with two model-based approaches. First, we prompt ChatGPT to grade the equivalence of two symbolic expressions with score options of 0 when they are totally incorrect, 0.5 when the symbolic expressions are nearly the same (e.g., equivalent up to a constant), and 1 when they are an exact match. Our prompting strategy can be found in the supplementary material.
More generally, we evaluate the capabilities of GPT-4 to grade intermediate reasoning chains via a rubric-based evaluation approach. For symbolic and proof-like problems, we few-shot prompt GPT-4 to create a 10-point rubric. This is done by handwriting a small set of initial rubrics for proof-like problems and prompting the model with these examples and the ground truth reference solution. The model assigns point values to intermediate steps using the reference solution as a guide. This process is illustrated in the supplementary material.
With model-generated rubrics in hand, we then evaluate each question against its rubric. This is done by again prompting GPT-4 to go step by step through the model answer and assign partial credit based on the rubric. This provides a denser automatic evaluation metric on increasingly unstructured answers. As a nice side benefit, it makes human evaluation of complex symbolic questions much easier, significantly reducing the amount of time required per question.
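The two-step rubric flow can be sketched as below: (1) have the model draft a 10-point rubric from the reference solution, then (2) have it grade an answer against that rubric. The llm() function is a placeholder for an actual GPT-4 call, and the prompt wording is illustrative rather than the paper's exact prompts.

```python
def llm(prompt: str) -> str:
    # Placeholder for a model client call (assumption, not the paper's code).
    raise NotImplementedError("plug in a model client here")

def make_rubric(question: str, reference_solution: str) -> str:
    # Step 1: draft a rubric that assigns partial credit to intermediate steps.
    return llm(
        "Write a 10-point grading rubric that assigns partial credit to the "
        f"intermediate steps of this solution.\nQuestion: {question}\n"
        f"Reference solution: {reference_solution}"
    )

def grade_with_rubric(rubric: str, model_answer: str) -> str:
    # Step 2: walk through the answer and award partial credit per the rubric.
    return llm(
        "Go step by step through the answer and award partial credit "
        f"according to the rubric.\nRubric: {rubric}\nAnswer: {model_answer}\n"
        "Report the total score out of 10."
    )
```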
# 4.1 Results | 2307.13692#15 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 15 | p < .001***, η² = .63 F(4, 1485) = 2409.07, p < .001***, η² = .87 F(4, 1485) = 691.91, p < .001***, η² = .65 F(4, 1485) = 2035.9, p < .001***, η² = .85 F(4, 1485) = 490.0, p < .001***, η² = .57 F(4, 1485) = 8182.93, p < .001***, η² = .96 F(4, 1485) = 3001, p < .001***, η² = .89 F(8, 1485) = 15.49, p < .001***, η² = .08 F(8, 1485) = 21.81, p < .001***, η² = .11 F(8, 1485) = 6.34, p < .001***, η² = .03 F(8, 1485) = 19.25, p < .001***, η² = .09 F(8, 1485) = 135.6, p < .001***, η² = .42 F(8, 1485) = 143.2, p < .001***, η² | 2307.13779#15 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 15 | INTENT COLLECTION
We focus on curating realistic intents to carry out complex and creative tasks within WebArena. To start with, our annotators were guided to spend a few minutes exploring the websites to familiarize themselves with the websites' content and functionalities. As most of our websites are virtually identical to their open-web counterparts, despite having sampled data, most annotators can quickly comprehend the websites.
Next, we instructed the annotators to formulate intents based on the following criteria:
(1) The intent should be abstract and high-level, implying that the task cannot be fulfilled with merely one or two actions. As an example, instead of "click the science subreddit", we encouraged annotators to come up with something more complex like "post a greeting message on science subreddit", which involves performing multiple actions.
(2) The intent should be creative. Common tasks such as account creation can be easily thought of. We encouraged the annotators to add constraints (e.g., "create a Reddit account identical to my GitLab one") to make the intents more unique. | 2307.13854#15 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 15 | student attention. For example, if a student's answer is correct but they misattribute a quote, the FreeText server could highlight the attribution specifically to give feedback. The type of feedback returned can be specified by the instructor during question creation.
# Discussion
Here we introduced FreeText, a framework capable of defining questions, collecting student responses, transmitting these responses alongside instructor expectations to a large language model (LLM), and generating rapid and personalized feedback for the students. Notably, the entirety of the student-facing workflow can be encapsulated within a Jupyter notebook, facilitating real-time enhancement of students' understanding of the course material. FreeText is not confined to a web application and Jupyter notebooks, or the academic subjects mentioned above. The FreeText Server can integrate with any application that consumes a JSON HTTP API, expanding its potential to a wider range of educational settings. | 2308.02439#15 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 16 | 2 In this paper, our focus lies in examining the consistency of the relationship between the paper title, authors, and publication year. However, the task of determining the suitability of the cited paper as the most appropriate choice is left for future investigation.
(iii) Our experiments demonstrate that the claim extraction module, implemented by ChatGPT, exhibits strong performance in extracting claims (atomic component units). The detailed results of these experiments are discussed in Section 6.1.
Here, we employ ChatGPT as a base LLM and apply different textual definitions of claims across four tasks. Our goal is to extract all verifiable claims within the generated text x, denoted as
3 https://openai.com/blog/chatgpt-plugins | 2307.13528#16 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
2307.13692 | 16 | # 4.1 Results
We now evaluate gpt-4, gpt-3.5-turbo, text-davinci-003, and claude-v1.3 on ARB. The results for the mechanically scored subjects are in Figure 1.
[Figure 1 bar chart: per-subject accuracy for claude-v1.3, text-davinci-003, gpt-3.5-turbo, and gpt-4; y-axis accuracy from 0.0 to 0.8.]
Figure 1: Accuracy of models over automatically scored components of the ARB benchmark. Numerical questions are evaluated with a relative error threshold of 10^-2.
We see models generally do quite well on the multiple choice Law and MCAT subsets, but struggle significantly on questions with numerical final answers. GPT-4 is the only model capable of reliably simplifying complex expressions, but even GPT-4 struggles to reliably perform arithmetic and symbolic manipulations over long contexts. | 2307.13692#16 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
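The ARB record above scores numerical answers against a relative error threshold of 10^-2 (Figure 1 caption). A minimal Python sketch of such a check, assuming an illustrative function name and a simple zero-reference rule not specified in the paper:

def within_relative_error(predicted: float, reference: float, tol: float = 1e-2) -> bool:
    # When the reference is zero, relative error is undefined; require an exact match (assumption).
    if reference == 0:
        return predicted == 0
    # Accept the prediction if |predicted - reference| / |reference| is within the tolerance.
    return abs(predicted - reference) / abs(reference) <= tol

print(within_relative_error(3.14, 3.14159))  # True at the 1e-2 threshold
print(within_relative_error(100.0, 90.0))    # False: roughly 11% relative error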
2307.13854 | 16 | (3) The intent should be formulated as a template by making replaceable elements as variables. The annotators were also responsible for developing several instantiations for each variable. For example, the intent "create a Reddit account identical to my GitLab one" can be converted into "create a {{site1}} account identical to my {{site2}} one", with an instantiation like "{site1: Reddit, site2: GitLab}" and another like "{site1: GitLab, site2: OneStopShopping}". Notably, tasks derived from the same template can have distinct execution traces. The similarity resides primarily in the high-level semantics rather than the specific implementation.
We also provided a prompt for the annotators to use with ChatGPT3 for inspiration, that contains an overview of each website and instructs the model to describe potential tasks to be performed on these sites. Furthermore, we offered a curated list of examples for annotators to reference.
Intent Analysis template is instantiated to 3.3 examples. The intent distribution is shown in Figure 6.
Furthermore, we classify the intents into three primary categories with examples shown in Figure 5: | 2307.13854#16 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
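The WebArena record above describes intent templates whose {{variable}} slots are filled with annotator-provided instantiations. A hedged sketch of that substitution step; the helper name and regex-based template syntax are assumptions, not the released WebArena code:

import re

def instantiate(template: str, variables: dict) -> str:
    # Replace each {{name}} placeholder with its value from `variables`.
    return re.sub(r"\{\{(\w+)\}\}", lambda m: variables[m.group(1)], template)

template = "create a {{site1}} account identical to my {{site2}} one"
print(instantiate(template, {"site1": "Reddit", "site2": "GitLab"}))
print(instantiate(template, {"site1": "GitLab", "site2": "OneStopShopping"}))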
2308.02439 | 16 | Our system's broad applicability becomes evident when considering diverse learning models, such as the pod-based approach adopted by the online course Neuromatch Academy (van Viegen et al., 2021) in the field of computational neuroscience. In such settings, small student groups or "pods" collaboratively tackle assignments and projects. Teaching Assistants, tasked with providing feedback, can benefit from our tool, as it can streamline grading processes, reducing potential for attentional errors and freeing up instructors to deliver more personalized guidance to students.
Fully automated student evaluation is challenging both from a technical perspective and from a human | 2308.02439#16 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 17 | [Framework overview figure, knowledge-based QA panel: the prompt "Who is the CEO of Twitter?", an LLM response claiming Jack Dorsey is the CEO and co-founded Twitter in 2006, claim extraction, query generation (e.g., "Who is the current CEO of Twitter?"), retrieved evidence (e.g., that Linda Yaccarino will become CEO), and claim-level [0, 1, ...] and response-level (0) factuality scores. Code generation panel: the prompt "Return a string containing space-delimited numbers starting from 0 up to n inclusive." and the generated string_sequence(n) implementation with generated test cases.] | 2307.13528#17 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
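The FacTool record above illustrates the code-generation task with a string_sequence(n) response and generated test cases. A runnable rendering of that example plus a simple execution-based check; the expected outputs are inferred from the task statement ("space-delimited numbers starting from 0 up to n inclusive") rather than taken from the paper:

def string_sequence(n):
    # Build "0 1 2 ... n" as shown in the figure's generated response.
    result = ""
    for i in range(n + 1):
        result += str(i) + " "
    return result.strip()

# Execution-based verification in the spirit of FacTool's code panel.
test_cases = {0: "0", 3: "0 1 2 3", 4: "0 1 2 3 4"}
for arg, expected in test_cases.items():
    assert string_sequence(arg) == expected
print("all test cases passed")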
2307.13692 | 17 | On the multiple-choice questions, the only model that cannot reliably follow the answer formatting instructions is gpt-3.5-turbo. This happens for a variety of reasons, including the model refusing to answer or to commit to a single answer choice. On the Law benchmark, gpt-3.5-turbo does not output a parsable answer around 25% of the time. The other models exhibit this failure in less than 5% of multiple-choice questions, with GPT-4 being correctly parsed over 99% of the time.
We see a similarly low performance profile across models on symbolic problems, reported in Table 2.
Table 2: Manually parsed scores for symbolic answer questions.
Model                Math Symbolic    Physics Symbolic
gpt-4-0314           18%              28%
gpt-3.5-turbo-0301   12%              6%
text-davinci-003     3%               6%
claude-v1.3-100k     3%               11%
# 4.2 What Kind of Errors Do LLMs Make? | 2307.13692#17 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 17 | Figure SM.4 illustrates emotional distancing/engagement from the goal of winning as a function of the game state. The left shows human self-report, and the middle and right are predictions from GPT models. Both models fail to predict engagement.
Figure SM.4: Consequence derivation results (corresponding to Fig 9. in the paper)
ANOVA results show that there are significant main effects of relevance and game state, as well as a significant interaction effect between them on importance. Table SM.4 provides a summary of the results. | 2307.13779#17 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
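The record above reports a two-way ANOVA (goal-relevance x game state, with interaction) on rated importance. A hedged sketch of such an analysis using statsmodels; the toy DataFrame and its column names are illustrative assumptions, not the authors' data or code:

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Small balanced toy design: 3 relevance levels x 2 game states, 2 ratings per cell (assumption).
df = pd.DataFrame({
    "importance": [55, 60, 80, 20, 30, 70, 50, 65, 85, 25, 35, 75],
    "relevance":  ["low", "medium", "high"] * 4,
    "game_state": ["winning"] * 6 + ["losing"] * 6,
})
model = ols("importance ~ C(relevance) * C(game_state)", data=df).fit()
print(anova_lm(model, typ=2))  # F values and p-values for the main effects and the interaction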
2307.13854 | 17 | Intent Analysis template is instantiated to 3.3 examples. The intent distribution is shown in Figure 6.
Furthermore, we classify the intents into three primary categories with examples shown in Figure 5:
(1) Information-seeking tasks expect a textual response. Importantly, these tasks in WebArena often require navigation across multiple pages or focus on user-centric content. This makes them distinct from open-domain question-answering (Yang et al., 2018; Kwiatkowski et al., 2019), which focuses on querying general knowledge with a simple retrieval step. For instance, to answer "When was the last time I bought the shampoo", an agent traverses the user's purchase history, checking order details to identify the most recent shampoo purchase.
(2) Site navigation: This category is composed of tasks that require navigating through web pages using a variety of interactive elements such as search functions and links. The objective is often to locate specific information or navigate to a particular section of a site.
3 https://chat.openai.com/ | 2307.13854#17 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 17 | perspective, and thus FreeText is designed not to fully automate grading, but to serve as a useful tool benefiting both students and instructors. FreeText benefits students by providing rapid and personalized feedback on short-answer questions. FreeText benefits instructors by helping them to design better questions and grading criteria, by providing first-pass material for learning assessments, and by alleviating some of the burden of providing individualized instruction in large classes. LLMs in general, and FreeText specifically, are not a replacement for human instructors, but they can nonetheless fill a niche among education technologies. LLMs undoubtedly hold immense power and potential. However, it is crucial to have an in-depth discussion about their ethical implications, especially in education. A key issue to consider is the potential biases that LLMs can introduce. These biases could unintentionally touch on sensitive subjects or unintentionally overlook marginalized groups. Instructors have a role to play by carefully designing their questions and assessment criteria. Further, students should be made aware of the nature of the system they are interacting with and its potential to make mistakes or act on internalized biases (Hsu et al., 2021). On the other hand, automated systems such as FreeText present an opportunity to reduce instructors' unconscious biases by evaluating all students' responses equally and without any explicit identification. | 2308.02439#17 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 18 | [Framework overview figure, continued. Code generation panel: the generated string_sequence(n) implementation, generated test cases string_sequence(4), string_sequence(0), string_sequence(3), their execution results, and a response-level factuality score. Math problem solving panel: a prompt about Marie buying 5 packs of milk at $3 each plus some boxes of pizza for a $45 total, extracted calculation claims (5*3 = 15, 45 - 15 = 30, 30/3 = 10), generated verification queries such as print(round(5*3, 7) == 15), and the reasoning that $30 remains for pizza after the milk.] | 2307.13528#18 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
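The FacTool record above shows math-claim verification queries of the form print(round(5*3, 7) == 15). A minimal sketch of turning extracted calculation claims into executable checks; the claim representation and helper name are assumptions:

# Each claim pairs an arithmetic expression with the value stated in the LLM response.
claims = [("5*3", 15), ("45-15", 30), ("30/3", 10)]

def verify_claim(expression: str, stated_value: float) -> bool:
    # eval is acceptable here only because the expressions are simple arithmetic
    # strings produced by the pipeline itself, not untrusted user input.
    return round(eval(expression), 7) == stated_value

claim_level = [verify_claim(expr, value) for expr, value in claims]
response_level = all(claim_level)
print(claim_level, response_level)  # [True, True, True] True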
2307.13692 | 18 | # 4.2 What Kind of Errors Do LLMs Make?
The GPT-4 evaluation paper [Bubeck et al., 2023] classified errors GPT-4 makes in single-pass evaluation on GSM8K [Cobbe et al., 2021] and MATH [Hendrycks et al., 2021] into three types: arithmetic mistakes, misunderstood statement, and wrong approach. We make a more fine-grained analysis and extend it to math and physics problems in our dataset. The results are in Table 3.
The errors current LLMs make on the Mathematics part of ARB fall into five general types:
• Misunderstanding / answering only a part of the question / misread problem; • Wrong approach: the model's early chain of thought does not guess the right approach;
Table 3: Mistakes on mathematics and physics problems in ARB, GPT-4. Logical error Arithmetic Correct answer 3% 16% n/a 6% 28%
Misread Wrong problem approach 25% 50% 50% 80% 37% or hallucination 88% 29% 72% 53% 68% mistake 48% 4% 16% 6% 31% 0% 16% 5% 0% 0% Correct reasoning 3% 16% 5% 6% 12%
⢠Logical errors: the model uses a false implication between two statements; | 2307.13692#18 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 18 | Table SM.4 (Table 4 in the paper): Impact of game state and relevance on importance of winning.
Model    Factor              F value    p            η² (partial)
GPT-3.5  Goal-relevance      41.73      p < .001***  0.05
GPT-3.5  Game State          59.55      p < .001***  0.14
GPT-3.5  Interaction Effect  9.85       p < .001***  0.05
GPT-4    Goal-relevance      78091.57   p < .001***  0.99
GPT-4    Game State          17.05      p < .001***  0.04
GPT-4    Interaction Effect  12.10      p < .001***  0.06
# 2.2 Prompt engineering
2.2.1 Prompt
We applied incremental adjustments to the original description given to human subjects to fix the GPT's inaccurate assignment of winning likelihood to the "lost/won" case. We assumed the model might not have understood the completed state of the game. Thus, we added extra reminders within the description for "lost" and "won" cases in a stepwise fashion to see a noticeable shift in the responses. GPT presumably evaded emotion-related questions by returning generic and non-committal responses. For example, it returned 50 when asked to give a number between 0 and 100. In some cases, the model returned all zeros. Thus, we also added a final statement to mitigate such behavior. The final adjusted prompts are as follows: | 2307.13779#18 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
2307.13854 | 18 | 3 https://chat.openai.com/
Figure 4: Action Space of WebArena.
Action Type        Description
noop               Do nothing
click(elem)        Click at an element
hover(elem)        Hover on an element
type(elem, text)   Type to an element
press(key_comb)    Press a key comb
tab_focus(index)   focus on i-th tab
new_tab            Open a new tab
tab_close          Close current tab
go_back            Visit the last URL
go_forward         Undo go_back
goto(URL)          Go to URL
Figure 5: Example intents from three categories.
Information Seeking: "When was the last time I bought shampoo"; "Compare walking and driving time from AMC Waterfront to Randyland"
Site Navigation: "Checkout merge requests assigned to me"; "Show me the ergonomic chair with the best rating"
Content & Config: "Post to ask 'whether I need a car in NYC'"
(3) Content and configuration operation: This category encapsulates tasks that require operating in the web environment to create, revise, or configure content or settings. This includes adjusting settings, managing accounts, performing online transactions, generating new web content, and modifying existing content. Examples range from updating a social media status or README file to conducting online purchases and configuring privacy settings.
3.2 EVALUATION ANNOTATION | 2307.13854#18 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
2308.02439 | 18 | Furthermore, we must consider the broader dynamics of the AI ecosystem. The realm of LLMs is not limited to the offerings of large AI conglomerates like OpenAI. A burgeoning industry of alternative LLMs, both from smaller commercial entities and open-source initiatives (Anthropic, 2023; Taori et al., 2023; Touvron et al., 2023; Wolf et al., 2020), is flourishing. Our
| 2308.02439#18 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |
2307.13528 | 19 | [Framework overview figure, continued. Math problem solving panel conclusion: 30/3 = 10, so Marie ordered 10 boxes of pizza, with execution results giving response-level factuality 1. Scientific literature review panel: a prompt asking to discuss the applications and limitations of quantum computing while citing at least one relevant paper with title, author(s), and publication year; the LLM response notes applications in cryptography, optimization, and simulation, limitations such as the need for error correction, and cites "Quantum Computing in the NISQ era and beyond" by John Preskill (2018); claim extraction, query generation, and retrieved evidence confirm the citation, giving claim-level factuality [1] and response-level factuality 1.] | 2307.13528#19 | FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios | The emergence of generative pre-trained models has facilitated the synthesis
of high-quality text, but it has also posed challenges in identifying factual
errors in the generated text. In particular: (1) A wider range of tasks now
face an increasing risk of containing factual errors when handled by generative
models. (2) Generated texts tend to be lengthy and lack a clearly defined
granularity for individual facts. (3) There is a scarcity of explicit evidence
available during the process of fact checking. With the above challenges in
mind, in this paper, we propose FacTool, a task and domain agnostic framework
for detecting factual errors of texts generated by large language models (e.g.,
ChatGPT). Experiments on four different tasks (knowledge-based QA, code
generation, mathematical reasoning, and scientific literature review) show the
efficacy of the proposed method. We release the code of FacTool associated with
ChatGPT plugin interface at https://github.com/GAIR-NLP/factool . | http://arxiv.org/pdf/2307.13528 | I-Chun Chern, Steffi Chern, Shiqi Chen, Weizhe Yuan, Kehua Feng, Chunting Zhou, Junxian He, Graham Neubig, Pengfei Liu | cs.CL, cs.AI | null | null | cs.CL | 20230725 | 20230726 | [
{
"id": "2110.14168"
},
{
"id": "2201.08239"
},
{
"id": "2211.10435"
},
{
"id": "2303.12712"
},
{
"id": "2304.05128"
},
{
"id": "2305.11206"
},
{
"id": "2212.07981"
},
{
"id": "2303.01432"
},
{
"id": "2207.10397"
}
] |
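The FacTool record above checks whether a cited paper's title, authors, and publication year are consistent with retrieved metadata. An illustrative sketch of that comparison; the dictionaries stand in for whatever scholarly search backend is used and are not FacTool's retrieval code:

def citation_consistent(claimed: dict, retrieved: dict) -> bool:
    # Treat the citation as factual only if title, authors, and year all match.
    return (
        claimed["title"].strip().lower() == retrieved["title"].strip().lower()
        and claimed["authors"] == retrieved["authors"]
        and claimed["publication_year"] == retrieved["publication_year"]
    )

claimed = {"title": "Quantum Computing in the NISQ era and beyond",
           "authors": "John Preskill", "publication_year": 2018}
retrieved = {"title": "Quantum Computing in the NISQ era and beyond",
             "authors": "John Preskill", "publication_year": 2018}
print(citation_consistent(claimed, retrieved))  # True -> claim-level factuality 1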
2307.13692 | 19 | • Logical errors: the model uses a false implication between two statements;
• Hallucinating facts or theorems: the model confabulates a statement that is false in general, or not applicable in context;
• Arithmetic/calculation error: the model multiplies incorrectly, omits a term in an expression, gives a wrong numerical value for a fraction, and other similar mistakes.
We grade GPT-4 using the above as a guideline. Our grading of the model's CoT answers is not mutually exclusive; if the model both uses an approach that doesn't go anywhere and makes a calculation error in it, we count it towards both categories. Note that the errors might not be independent: arithmetic mistakes could be more or less frequent in wrong approach solutions as opposed to the solutions with correct idea. We notice that the model is likely to make incorrect simplifications to get to some final answer in approaches that cannot work; this is expected, as prompting the model to produce a solution with a final answer often leads it to produce some final answer by any means.
When the model outputs a chain of implications, it is not always clear whether some false statement is due to a logical error, or it is a straight-out confabulation. We merge those two error types in Table 3. | 2307.13692#19 | ARB: Advanced Reasoning Benchmark for Large Language Models | Large Language Models (LLMs) have demonstrated remarkable performance on
various quantitative reasoning and knowledge benchmarks. However, many of these
benchmarks are losing utility as LLMs get increasingly high scores, despite not
yet reaching expert performance in these domains. We introduce ARB, a novel
benchmark composed of advanced reasoning problems in multiple fields. ARB
presents a more challenging test than prior benchmarks, featuring problems in
mathematics, physics, biology, chemistry, and law. As a subset of ARB, we
introduce a challenging set of math and physics problems which require advanced
symbolic reasoning and domain knowledge. We evaluate recent models such as
GPT-4 and Claude on ARB and demonstrate that current models score well below
50% on more demanding tasks. In order to improve both automatic and assisted
evaluation capabilities, we introduce a rubric-based evaluation approach,
allowing GPT-4 to score its own intermediate reasoning steps. Further, we
conduct a human evaluation of the symbolic subset of ARB, finding promising
agreement between annotators and GPT-4 rubric evaluation scores. | http://arxiv.org/pdf/2307.13692 | Tomohiro Sawada, Daniel Paleka, Alexander Havrilla, Pranav Tadepalli, Paula Vidas, Alexander Kranias, John J. Nay, Kshitij Gupta, Aran Komatsuzaki | cs.CL, cs.LG | Submitted to NeurIPS Datasets and Benchmarks Track | null | cs.CL | 20230725 | 20230728 | [
{
"id": "2212.14402"
},
{
"id": "1911.01547"
},
{
"id": "2206.04615"
},
{
"id": "2206.05442"
},
{
"id": "2302.04166"
},
{
"id": "2303.12712"
},
{
"id": "2302.14520"
},
{
"id": "2212.09251"
},
{
"id": "2302.13814"
},
{
"id": "2204.01075"
},
{
"id": "2305.14763"
},
{
"id": "2305.01937"
},
{
"id": "2303.16634"
}
] |
2307.13779 | 19 | For utility in ("a little bit", "a medium amount", "a lot"):
For game_state in ("Imagine you have lost the game.", "Imagine you have obtained zero heads, and your opponent has two heads. It is your turn.", "Imagine the game is just about to start. It is your turn.", "Imagine you have obtained two heads and your opponent has zero heads. It is your turn.", "Imagine you have won the game."):
"Imagine you are playing a simple game with someone. "
"You take turns flipping a coin. The first person to get four heads wins. "
If lost or won:
"{game_state} Imagine that you were playing for {utility} of money. Keep in mind that the game is over now. "
Else:
"{game_state} Imagine that you are playing for {utility} of money. " | 2307.13779#19 | Is GPT a Computational Model of Emotion? Detailed Analysis | This paper investigates the emotional reasoning abilities of the GPT family
of large language models via a component perspective. The paper first examines
how the model reasons about autobiographical memories. Second, it
systematically varies aspects of situations to impact emotion intensity and
coping tendencies. Even without the use of prompt engineering, it is shown that
GPT's predictions align significantly with human-provided appraisals and
emotional labels. However, GPT faces difficulties predicting emotion intensity
and coping responses. GPT-4 showed the highest performance in the initial study
but fell short in the second, despite providing superior results after minor
prompt engineering. This assessment brings up questions on how to effectively
employ the strong points and address the weak areas of these models,
particularly concerning response variability. These studies underscore the
merits of evaluating models from a componential perspective. | http://arxiv.org/pdf/2307.13779 | Ala N. Tak, Jonathan Gratch | cs.CL, cs.AI, cs.CY, cs.HC | null | null | cs.CL | 20230725 | 20230725 | [
{
"id": "2302.08399"
}
] |
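The record above lists the prompt-construction pseudocode for the coin-flip game vignettes. A runnable Python rendering of that loop; the terminal-state test (a "lost"/"won" substring check) is an assumption about how the If branch is decided:

utilities = ["a little bit", "a medium amount", "a lot"]
game_states = [
    "Imagine you have lost the game.",
    "Imagine you have obtained zero heads, and your opponent has two heads. It is your turn.",
    "Imagine the game is just about to start. It is your turn.",
    "Imagine you have obtained two heads and your opponent has zero heads. It is your turn.",
    "Imagine you have won the game.",
]
prompts = []
for utility in utilities:
    for game_state in game_states:
        prompt = ("Imagine you are playing a simple game with someone. "
                  "You take turns flipping a coin. The first person to get four heads wins. ")
        if "lost" in game_state or "won" in game_state:
            prompt += (f"{game_state} Imagine that you were playing for {utility} of money. "
                       "Keep in mind that the game is over now. ")
        else:
            prompt += f"{game_state} Imagine that you are playing for {utility} of money. "
        prompts.append(prompt)
print(len(prompts))  # 15 prompts: 3 utility levels x 5 game states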
2307.13854 | 19 | 3.2 EVALUATION ANNOTATION
Evaluating Information Seeking Tasks To measure the correctness of information-seeking tasks where a textual answer is expected, we provide the annotated answer a* for each intent. The a* is further compared with the predicted answer â with one of the following scoring functions r_info(â, a*). First, we define exact_match where only â that is identical with a* receives a score of one. This function is primarily applicable to intent types whose responses follow a more standardized format, similar to the evaluation on question answering literature (Rajpurkar et al., 2016; Yang et al., 2018). Second, we create must_include where any â containing a* receives a score of one. This function is primarily used when an unordered list of text is expected or where the emphasis of evaluation is on certain key concepts. In the second example in Table 1, we expect both the correct name and the email address to be presented, irrespective of the precise wording used to convey the answer. | 2307.13854#19 | WebArena: A Realistic Web Environment for Building Autonomous Agents | With advances in generative AI, there is now potential for autonomous agents
to manage daily tasks via natural language commands. However, current agents
are primarily created and tested in simplified synthetic environments, leading
to a disconnect with real-world scenarios. In this paper, we build an
environment for language-guided agents that is highly realistic and
reproducible. Specifically, we focus on agents that perform tasks on the web,
and create an environment with fully functional websites from four common
domains: e-commerce, social forum discussions, collaborative software
development, and content management. Our environment is enriched with tools
(e.g., a map) and external knowledge bases (e.g., user manuals) to encourage
human-like task-solving. Building upon our environment, we release a set of
benchmark tasks focusing on evaluating the functional correctness of task
completions. The tasks in our benchmark are diverse, long-horizon, and designed
to emulate tasks that humans routinely perform on the internet. We experiment
with several baseline agents, integrating recent techniques such as reasoning
before acting. The results demonstrate that solving complex tasks is
challenging: our best GPT-4-based agent only achieves an end-to-end task
success rate of 14.41%, significantly lower than the human performance of
78.24%. These results highlight the need for further development of robust
agents, that current state-of-the-art large language models are far from
perfect performance in these real-life tasks, and that WebArena can be used to
measure such progress. | http://arxiv.org/pdf/2307.13854 | Shuyan Zhou, Frank F. Xu, Hao Zhu, Xuhui Zhou, Robert Lo, Abishek Sridhar, Xianyi Cheng, Tianyue Ou, Yonatan Bisk, Daniel Fried, Uri Alon, Graham Neubig | cs.AI, cs.CL, cs.LG | Our code, data, environment reproduction resources, and video
demonstrations are publicly available at https://webarena.dev/ | null | cs.AI | 20230725 | 20231025 | [
{
"id": "2112.09332"
},
{
"id": "2306.00245"
},
{
"id": "2307.12856"
},
{
"id": "2305.14257"
}
] |
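The WebArena record above defines the exact_match and must_include scoring functions for information-seeking tasks. A minimal sketch of the two functions; the signatures, normalization, and example strings are assumptions rather than the released evaluation code:

def exact_match(predicted: str, annotated: str) -> float:
    # Score 1 only when the prediction is identical to the annotated answer.
    return float(predicted.strip() == annotated.strip())

def must_include(predicted: str, annotated_items: list) -> float:
    # Score 1 only when every annotated key phrase appears in the prediction.
    return float(all(item.lower() in predicted.lower() for item in annotated_items))

print(exact_match("42", "42"))  # 1.0
print(must_include("Contact Jane Doe at jane@example.com", ["Jane Doe", "jane@example.com"]))  # 1.0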
2308.02439 | 19 | framework is designed to be model-agnostic and can be readily adapted to integrate these alternative models. Reliance solely on models from a single entity such as OpenAI raises two significant concerns. First, it centralizes the concentration of AI development resources and power, thereby exacerbating the already pronounced inequalities in the global AI landscape. Second, it can lead to a homogenization of the knowledge and perspectives propagated by AI models, potentially resulting in a limited and biased worldview. FreeText is therefore deliberately agnostic to the underlying LLM model and technologies.
We intend for our tool to enrich and expand students' educational experience, particularly in large-scale or resource-constrained course settings where detailed human intervention may be limited. Ongoing work includes the careful critique and evaluation of FreeText outputs by expert instructors, taking advantage of upcoming opportunities to apply this technology in a large class setting.
Embracing both technical as well as human diversity helps mitigate many of the concerns raised above and enriches the AI ecosystem. A broad range of perspectives stalls the monopolization of AI technology and fosters a more balanced, equitable, and robust AI landscape. This viewpoint aligns with our belief in the need for broad and diverse human inputs, both in the creation of AI models and in their applications in society. | 2308.02439#19 | A large language model-assisted education tool to provide feedback on open-ended responses | Open-ended questions are a favored tool among instructors for assessing
student understanding and encouraging critical exploration of course material.
Providing feedback for such responses is a time-consuming task that can lead to
overwhelmed instructors and decreased feedback quality. Many instructors resort
to simpler question formats, like multiple-choice questions, which provide
immediate feedback but at the expense of personalized and insightful comments.
Here, we present a tool that uses large language models (LLMs), guided by
instructor-defined criteria, to automate responses to open-ended questions. Our
tool delivers rapid personalized feedback, enabling students to quickly test
their knowledge and identify areas for improvement. We provide open-source
reference implementations both as a web application and as a Jupyter Notebook
widget that can be used with instructional coding or math notebooks. With
instructor guidance, LLMs hold promise to enhance student learning outcomes and
elevate instructional methodologies. | http://arxiv.org/pdf/2308.02439 | Jordan K. Matelsky, Felipe Parodi, Tony Liu, Richard D. Lange, Konrad P. Kording | cs.CY, cs.AI | null | null | cs.CY | 20230725 | 20230725 | [
{
"id": "2106.01399"
},
{
"id": "2307.09288"
},
{
"id": "1902.09183"
}
] |