id | title | content | prechunk_id | postchunk_id | arxiv_id | references
---|---|---|---|---|---|---
2307.13528#104
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
Aug 1, 2019"] } [Conï¬ icting Evidences] { "claim": "Jupiter has a density of 1.33 grams per cubic centimeter", "label": true, "with_tool_classiï¬ cation": false, "with_tool_reasoning": "The given text is mostly in line with the provided evidences. There is a minor dis- crepancy between the two evidences, with one stating the density of Jupiter as 1.326 g/cmË 00b3 and the other as 1.33 g/cmË 00b3. However, the difference is minimal and could be due to rounding.", "queries": ["Jupiter density grams per cubic centime- ter", "Density of Jupiter"], "evidences": ["Density of Jupiter: However, at 1.326 g/cm3, it is also the second-most dense of the gas gi- ants. Feb 17, 2016", "1.33 g/cmË 00b3"] } [Ambiguity in descriptions] { "claim": "Fortune cookies are enjoyed by people all over the world.", "label": false, "with_tool_classiï¬ cation": true, "with_tool_reasoning": "The given text states that for- tune cookies are enjoyed by people all over the world. While the evidence provided does not explicitly men- tion the global enjoyment of fortune cookies, it does mention that 3 billion fortune cookies are made each year globally, with the majority consumed in the US. This suggests that fortune cookies are indeed enjoyed by people in various parts of the world.", "queries": ["Origin of fortune cookies", "Popularity of fortune cookies worldwide"], "evidences": ["Japan", "There are approximately 3 bil- lion fortune cookies made each year globally, the ma- jority of them consumed in the US. The largest man- ufacturer of the cookies is Wonton Food, Inc., head- quartered in Brooklyn, New York. They make over 4.5 million fortune cookies per day."] }
|
2307.13528#103
|
2307.13528#105
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#105
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
Figure 15: Some error cases of FACTOOL on KB-QA. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, the evidences collected, the reasoning process, and the final decision.

[Limited variety in synthetic testcases] { "task_id": "HumanEval/64", "prompt": "Write a function vowels_count(s) which takes a string representing a word as input and returns the number of vowels in the string. Vowels in this case are 'a', 'e', 'i', 'o', 'u'. Here, 'y' is also a vowel, but only when it is at the end of the given word.", "entry_point": "vowels_count", "completion":

def vowels_count(s):
    vowels = ['a', 'e', 'i', 'o', 'u']
    count = 0
    for i in range(len(s)):
        if s[i] in vowels:
            count += 1
        if i == len(s) - 1 and s[i] == 'y':
            count += 1
    return count

"result": "failed: AssertionError: Test 2", "passed": false, "with_tool_classification": true, "testcases_input": ["vowels_count('encyclopedia')", "vowels_count('rhythm')", "vowels_count('beautiful')"], "multi_solutions": [

def vowels_count(s):
    vowels = 'aeiou'
    count = 0
    for i, c in enumerate(s):
        if c.lower() in vowels:
            count += 1
        elif c.lower() == 'y' and i == len(s) -
|
2307.13528#104
|
2307.13528#106
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#106
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
1:
            count += 1
    return count

def vowels_count(s):
    vowels = 'aeiou'
    count = 0
    for char in s:
        if char.lower() in vowels:
            count += 1
    if s[-1].lower() == 'y':
        count += 1
    return count

def vowels_count(s):
    vowels = 'aeiou'
    count = 0
    for char in s:
        if char.lower() in vowels:
            count += 1
    if s[-1].lower() == 'y':
        count += 1
    return count

def vowels_count(s):
    vowels = ['a', 'e', 'i', 'o', 'u']
    count = 0
    for i in range(len(s)):
        if s[i] in vowels:
            count += 1
        if i == len(s) - 1 and s[i] == 'y'
|
2307.13528#105
|
2307.13528#107
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#107
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
:
            count += 1
    return count
]
"exec_result": [[5, 5, 5, 5], [0, 0, 0, 0], [5, 5, 5, 5]] }

[Potential errors in code generation] { "task_id": "HumanEval/79", "prompt":

def decimal_to_binary(decimal):
    """You will be given a number in decimal form and your task is to convert it
    to binary format. The function should return a string, with each character
    representing a binary number. Each character in the string will be '0' or '1'.
    There will be an extra couple of characters 'db' at the beginning and
|
2307.13528#106
|
2307.13528#108
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#108
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
    at the end of the string. The extra characters are there to help with the format. """,
"entry_point": "decimal_to_binary", "completion":

def decimal_to_binary(decimal):
    binary_list = []
    while decimal > 0:
        remainder = decimal % 2
        binary_list.append(str(remainder))
        decimal = decimal // 2
    binary_list.reverse()
    binary_string =
|
2307.13528#107
|
2307.13528#109
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#109
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
''.join(binary_list)
    return 'db' + binary_string + 'db'

"result": "failed: AssertionError: ", "passed": false, "with_tool_classification": true, "testcases_input": ["decimal_to_binary(10)", "decimal_to_binary(255)", "decimal_to_binary(0)"], "multi_solutions": [

def decimal_to_binary(decimal):
    binary_num = ''
    while decimal > 0:
        remainder = decimal % 2
        decimal = decimal // 2
        binary_num = str(remainder) + binary_num
    return 'db' + binary_num + 'db'

def decimal_to_binary(decimal):
    binary = ''
    while decimal > 0:
        remainder = decimal % 2
        decimal = decimal // 2
        binary = str(remainder) + binary
    return 'db' + binary + 'db'

def decimal_to_binary(decimal):
    binary = ''
|
2307.13528#108
|
2307.13528#110
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#110
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
    while decimal > 0:
        remainder = decimal % 2
        decimal = decimal // 2
        binary = str(remainder) + binary
    return 'db' + binary + 'db'

def decimal_to_binary(decimal):
    binary_list = []
    while decimal > 0:
        remainder = decimal % 2
        binary_list.append(str(remainder))
        decimal = decimal // 2
    binary_list.reverse()
    binary_string = ''.join(binary_list)
    return 'db' + binary_string + 'db'
|
2307.13528#109
|
2307.13528#111
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#111
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
], "exec_result": "db1010db"], ["db11111111db", "db11111111db", "db11111111db", "db11111111db"], ["dbdb", "dbdb", "dbdb", "dbdb"]] } [["db1010db", "db1010db", "db1010db", Figure 16: Some error cases of FACTOOL on code. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the ï¬ nal decision is also reported. [Round-off error] { "math_calculation": "60444034 / 12", "calculated_answer": "5037002.83", "label": true, "query": "python_snippet": "calculation = 60444034 / 12; answer = 5037002.83; print(abs(calculation - answer) < 1e-9)", "exec_result": "False", "with_tool_classiï¬ cation": false } [Reasoning error] { "input":
|
2307.13528#110
|
2307.13528#112
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#112
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
"Kylar went to the store to buy glasses for his new apartment. One glass costs $5, but every second glass costs only 60% of the price. Kylar wants to buy 5364765 glasses. How much does he need to pay for them?", "target": 21459061.0, "response": "Letâ s assume that Kylar buys x glasses at the full price of $5 each. Therefore, he will buy (5364765 - x) glasses at a discounted price of 60% of $5, which is $3. Now, we can set up the equation: 5x + 3(5364765 - x) = total cost Simplifying the equation, we get: 5x + 16094295 - 3x = total cost 2x + 16094295 = total cost Therefore, the total cost of all the glasses will be 2x + 16094295 dollars.
|
2307.13528#111
|
2307.13528#113
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#113
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
Now, we just need to find the value of x. We know that Kylar wants to buy 5364765 glasses, so we can substitute this value in the equation: 2x + 16094295 = total cost; 2x + 16094295 = 5,364,765 * 5; 2x + 16094295 = 26823825; 2x = 10729530; x = 5364765. Therefore, Kylar needs to pay 2x + 16094295 dollars, which is: 2(5364765) + 16094295 = 26823825. Hence, Kylar needs to pay $26823825 for all the glasses.", "answer": "26823825", "annotation": false, "claims": [{"math_calculation": "5364765 - x", "calculated_answer": "5364765 - x", "label": true}, {"math_calculation": "0.60 * 5", "calculated_answer": "3", "label": true}, {"math_calculation": "5x", "calculated_answer": "5x", "label": true}, {"math_calculation": "3(5364765 - x)", "calculated_answer": "16094295 - 3x", "label": true}, {"math_calculation": "5x + 16094295 - 3x", "calculated_answer": "16094295 + 2x", "label": true}, {"math_calculation": "2x + 16094295", "calculated_answer": "total cost", "label": "null"}, {"math_calculation": "5364765 * 5", "calculated_answer": "26823825", "label": true}, {"math_calculation": "2(5364765)", "calculated_answer": "10729530", "label": true}, {"math_calculation": "2x", "calculated_answer": "10729530", "label": "null"}] "math_calculation": "5364765 - x", "calculated_answer": "5364765 - x", "label": true, "query": "python_snippet":
|
2307.13528#112
|
2307.13528#114
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#114
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
"x = 0; math_calculation = 5364765 - x; calculated_answer = 5364765 - x; print(math_calculation == calculated_answer)", "exec_result": "True", "with_tool_classiï¬ cation": true "math_calculation": "calculation = 0.60 * 5; answer = 3; print(calculation == answer)", "0.60 * 5", "calculated_answer": "exec_result": "True", "with_tool_classiï¬ cation": true "math_calculation": "5x", "calculated_answer": "5x", "label": true, "query": "python_snippet": "def val- idate_calculation(calculation, answer): return calculation == answer; print(validate_calculation(â
|
2307.13528#113
|
2307.13528#115
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#115
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
5x', '5x'))", "exec_result": "True", "with_tool_classification": true "math_calculation": "3(5364765 - x)", "calculated_answer": "16094295 - 3x", "label": true, "query": "python_snippet": "x = 1; result1 = 3 * (5364765 - x); result2 = 16094295 - 3 * x; print(result1 == result2)", "exec_result": "True", "with_tool_classification": true "math_calculation": "5x + 16094295 - 3x", "calculated_answer": "16094295 + 2x", "label": true, "query": "python_snippet": "x = 1; print((5 * x + 16094295 - 3 * x) == (16094295 + 2 * x))", "exec_result": "True", "with_tool_classification": true "math_calculation": "2x + 16094295", "calculated_answer": "total cost", "label": "null", "query": "python_snippet": "x = 5; math_calculation = 2 * x + 16094295; calculated_answer = 16094305; print(math_calculation == calculated_answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "5364765 * 5", "calculated_answer": "26823825", "label": true, "query": "python_snippet": "calculation = 5364765 * 5; answer = 26823825; print(calculation == answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "2(5364765)", "calculated_answer": "10729530", "label": true, "query": "python_snippet": "calculation = 2 * 5364765; answer = 10729530; print(calculation == answer)", "exec_result": "True", "with_tool_classification": true "math_calculation": "2x", "calculated_answer": "10729530", "label": "null", "query": "python_snippet":
|
2307.13528#114
|
2307.13528#116
|
2307.13528
|
[
"2110.14168"
] |
2307.13528#116
|
FacTool: Factuality Detection in Generative AI -- A Tool Augmented Framework for Multi-Task and Multi-Domain Scenarios
|
"x = 5364765; print(2 * x == 10729530)", "exec_result": "True", "with_tool_classiï¬ cation": true } Figure 17: Some error cases of FACTOOL on math. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the ï¬ nal decision is also reported. [Errors in title matching] { "paper_title": "MDMA-assisted psychotherapy for treatment of PTSD: study design and rationale for phase 3 trials based on pooled analysis of six phase 2 randomized controlled trials", "paper_author(s)": "Mithoefer et al.", "paper_pub_year": "2019", "label": true, "with_tool_classiï¬ cation": false, "error": ["wrong_paper_title"] } [Errors in author matching] { "paper_title": "Language Models are Unsupervised Multitask Learners", "paper_author(s)": "Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever", "paper_pub_year": "2019", "label": true, "with_tool_classiï¬ cation": false, "error": ["wrong_paper_author(s)"] } Figure 18: Some error cases of FACTOOL on scientiï¬ c. The full response of FACTOOL powered by GPT-4 is reported, including the queries generated, evidences collected, the reasoning process, and the ï¬ nal decision is also reported.
|
2307.13528#115
|
2307.13528
|
[
"2110.14168"
] |
|
2308.02439#0
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
arXiv:2308.02439v1 [cs.CY] 25 Jul 2023 # A large language model-assisted education tool to provide feedback on open-ended responses # Jordan K. Matelsky 1,2, Felipe Parodi 3, Tony Liu 4, Richard D. Lange 1,5, and Konrad P. Kording 1,3,4,6 1Department of Bioengineering, University of Pennsylvania; 2Research & Exploratory Development Department, Johns Hopkins University Applied Physics Laboratory; 3Department of Neuroscience, University of Pennsylvania; 4Department of Computer Science, University of Pennsylvania; 5Department of Computer Science, Rochester Institute of Technology; 6CIFAR LMB Program
|
2308.02439#1
|
2308.02439
|
[
"2106.01399"
] |
|
2308.02439#1
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Open-ended questions are a favored tool among instructors for assessing student understanding and encouraging critical exploration of course material. Providing feedback for such responses is a time-consuming task that can lead to overwhelmed instructors and decreased feedback quality. Many instructors resort to simpler question formats, like multiple-choice questions, which provide immediate feedback but at the expense of personalized and insightful comments. Here, we present a tool that uses large language models (LLMs), guided by instructor-defined criteria, to automate responses to open-ended questions. Our tool delivers rapid personalized feedback, enabling students to quickly test their knowledge and identify areas for improvement. We provide open-source reference implementations both as a web application and as a Jupyter Notebook widget that can be used with instructional coding or math notebooks. With instructor guidance, LLMs hold promise to enhance student learning outcomes and elevate instructional methodologies. Large language models | Automated learning assessment | Automated grading | Education Correspondence: [email protected]
|
2308.02439#0
|
2308.02439#2
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#2
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
# Introduction Open-ended questions (questions that require students to produce multi-word, nontrivial responses) are a popular assessment tool in educational environments because they offer students the chance to explore their understanding of learning material. Such questions provide valuable insight into students' grasp of complex concepts and their problem-solving approaches. However, grading open-ended questions can be time-consuming, subjective, and (especially in the case of large class sizes) prone to attentional errors. These factors create a critical bottleneck in precision education. Large Language Models (LLMs) present an opportunity to automate and promote equity in learning assessments, providing rapid valuable feedback to students while reducing the burden on instructors. We developed a tool that automatically assesses students' responses to open-ended questions by evaluating their responses against a set of instructor-defined criteria. To use our tool, the instructor poses a question along with optional grading criteria. Students respond to these questions, and their answers are relayed to a server. The responses are paired with the grading criteria (which are not revealed to the student), forming a payload for a large language model (LLM). The LLM then generates automated feedback, suggesting areas for improvement to the student. Here, we describe the technical design of our tool, FreeText, and showcase its utility in educational environments spanning topics and complexity. We further outline the implications of our work for teaching complex subjects, and the potential role of large language models in education (Fig. 1). We share our source code and a public URL (see Supplemental Materials), allowing educators to experiment with FreeText firsthand.
|
2308.02439#1
|
2308.02439#3
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#3
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
[Figure 1 plot: grading approaches (multiple choice, heuristic autograders, LLM autograders, human graders) positioned on axes of Throughput vs. Feedback Quality, with a "superhuman grading" region; arrows indicate that faster technology raises throughput and better prompts raise feedback quality.] Figure 1. Sketch comparing grading throughput and quality of feedback to students among various assessment methodologies. The y-axis represents throughput (i.e., rapidity of feedback generation and number of assignments evaluated per real-world unit-time or cost), and the x-axis represents feedback quality (a qualitative measure of personalization and detail of feedback given to students). LLMs have the potential to fill a niche among educational tools by striking a balance between quantity and quality, delivering high throughput with feedback quality comparable to human graders. Improvements in technology (faster GPU cards, better LLM architectures) will continue to push throughput upward, and improvements in prompt design (or other domain-specific adaptations) will improve the quality of LLM-generated feedback.
|
2308.02439#2
|
2308.02439#4
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#4
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
# Related Work Automated grading is a longstanding pursuit in the field of education technology. Early automated grading tools focused on "solvable" tasks like math or programming assignments, where grading generally relies on unit tests or direct output comparisons (Hollingsworth, 1960; Ureel II and Wallace, 2019; Orr and Russell, 2021; Messer et al., 2023). These approaches often overlook less easily-quantified but nonetheless critical indicators of learning and understanding, such as design quality, code maintainability, or potential areas of student confusion. Modern tools, like AutoGrader, which provides real-time grading for programming exercises, remain narrowly focused on output correctness and do not sufficiently account for documentation or maintainability (Liu et al., 2019). Assessing students' understanding from natural language responses, however, presents different challenges and has seen significant evolution. Early Automated Short Answer Grading (ASAG) models employed statistical or domain-specific neural network approaches (Heilman and Madnani, 2013; Riordan et al., 2017; Sung et al., 2019). In recent years, LLMs have been shown to outperform domain-specific language models (Radford et al., 2019; Mizumoto et al., 2019; Brown et al., 2020; Chung et al., 2022). LLMs facilitate grading of open-ended assignment responses, without the need for task-specific fine-tuning (Cao, 2023; Mizumoto and Eguchi, 2023; Yoon, 2023). However, Kortemeyer (2023) revealed that while LLMs like GPT-4 could be useful for preliminary grading of introductory physics assignments, they fell short for natural-language responses required in comprehensive exam grading. Further, while LLMs like GitHub Copilot streamline the process of code generation and review, they can fall short on more nuanced programming tasks and open-ended evaluation (Finnie-Ansley et al., 2022). Thus, in their current state, LLMs should be treated as a useful but fallible tool, with final assessments still in the hands of (human) instructors.
|
2308.02439#3
|
2308.02439#5
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#5
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
It is also important to consider how students perceive AI graders and how automated graders are deployed to educational settings (Burrows et al., 2015; Saha et al., 2019; Zhu et al., 2022). Many comment on the socio-technical dynamics of automated grading, including the potential for introduction of machine bias (e.g., Hsu et al. (2021)). The use of NLP for short answer grading is not a trivial task and has been set as an evaluation challenge in its own right (Dzikovska et al., 2013). To address the evolving needs of grading open-ended responses, our framework proposes four key enhancements. First, it is specifically designed for open-ended questions, which are not typically well-served by the rubric-based grading of most ed-tech tools.
|
2308.02439#4
|
2308.02439#6
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#6
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Second, our system leverages LLMs to deliver rapid, personalized feedback for student responses without explicitly attempting to produce a quantitative grade. Third, our framework introduces a feedback loop to continually improve instructor-provided prompts, question suggestions, and grading criteria. Lastly, our tool integrates with the Jupyter Notebook environment, extensively utilized in fields such as computer science, data science, and statistics. # Approach We have designed our tool for use in a variety of educational contexts, ranging from primary school education to graduate courses. FreeText enables educators to integrate open-ended questions into their curriculum without incurring an instructor labor cost. This allows students to gain rapid, individualized, and sophisticated feedback, thereby creating a highly effective learning loop that can enhance the absorption of course materials. It guides students in refining their responses, enhancing their understanding and application of concepts in each iteration. This feedback is generated by a large language model (LLM), which circumvents the attentional errors often made by human graders, particularly when assessing a large volume of assignments. The LLM is capable of delivering intricate responses to students swiftly, as demonstrated by the examples provided in Table 1.
|
2308.02439#5
|
2308.02439#7
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#7
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Our software is packaged as a Python library. LLM interactions are handled by the Guidance Python package (Microsoft, 2023). User interfaces and a JSON HTTP API are supported by FastAPI (Lathkar, 2023). We support traditional (e.g., JSON files, SQLite) as well as cloud-based data storage drivers. Our server can be run at low financial and computational cost through the combination of serverless deployment (e.g., to AWS Lambda) and serverless databases (e.g., AWS DynamoDB).
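As a rough illustration of the stack described above, the sketch below shows how a FastAPI endpoint might pair a student response with instructor-defined criteria before handing the combined prompt to an LLM. The endpoint path, field names, in-memory question store, and the call_llm stub are assumptions for illustration, not the actual FreeText API.

```python
# Minimal sketch, assuming hypothetical endpoint and field names (not the real FreeText API).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Stand-in for the JSON/SQLite or cloud-backed question store.
QUESTIONS = {
    "q1": {
        "prompt": "Explain what a neuron is.",
        "criteria": "Must include the terms 'synapse' and 'action potential'.",
    }
}

class Submission(BaseModel):
    question_id: str
    response: str

def call_llm(prompt: str) -> str:
    # Stub so the sketch runs without an LLM backend; the real server uses Guidance.
    return "Example feedback."

@app.post("/feedback")
def feedback(sub: Submission) -> dict:
    q = QUESTIONS[sub.question_id]
    # The grading criteria stay server-side; only the generated feedback is returned.
    llm_payload = (
        f"Question: {q['prompt']}\n"
        f"Grading criteria (hidden from the student): {q['criteria']}\n"
        f"Student response: {sub.response}\n"
        "Give constructive feedback without revealing the criteria verbatim."
    )
    return {"feedback": call_llm(llm_payload)}
```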
|
2308.02439#6
|
2308.02439#8
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#8
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Student responses are not stored by FreeText infrastructure by default. Any Guidance-compatible LLM may be swapped into the FreeText server. That is, by default we access LLMs through the OpenAI API, but it is easy to swap in locally hosted or fine-tuned models: thus, privileged or sensitive information may be kept to on-premise compute resources, or users may opt to change which API-based LLM is accessed. For example, a more powerful LLM may be selected in cases where course content is particularly complex, or a simpler model may be used for more elementary course content. One front-end that students can access is a Jupyter Notebook widget, developed using IPyWidgets (Kluyver et al., 2016), making it easy to incorporate natural-language short-answer questions as part of a notebook-based active-learning environment. The widget communicates with the backend
|
2308.02439#7
|
2308.02439#9
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#9
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Python server described above. The widget is designed to be easily integrated into lecture and homework notebooks, enabling instructors to easily enrich existing teaching materials. A distinctive feature of our system is the intermediary server which equips the large language model with "held-out" information, such as a rubric for correct responses, accessible only to the LLM and instructor, and not to the student. This establishes the useful informational asymmetry between the evaluator and the student. To include the widget in a Python environment, the instructor can include the following code:

!pip install freetext_jupyter

from freetext_jupyter import FreetextWidget

FreetextWidget(
|
2308.02439#8
|
2308.02439#10
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#10
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
    # This ID is generated by the instructor.
    "07b2c3ef-0f97-46bc-a11e-..."
)

When executed in a Jupyter notebook cell, this code will access the HTTP API to replace the widget with the corresponding question text for the student. Upon encountering the widget in a notebook, the student is presented with an open-ended question accompanied by a text box for response input. When they submit their response, the system transmits it to the server for combination with the feedback criteria set by the instructor. In the next stage, the student response and the pre-defined feedback criteria are bundled into a payload dispatched to a large language model. The LLM processes this payload and produces personalized feedback to the response. This feedback is relayed back to the student within seconds through the web or notebook interface, offering them the immediate opportunity to reflect, amend, and improve their response as desired (Fig. 2). Our tool is designed to be easily deployable and scalable. The FreeText server can be run in resource-constrained or serverless platforms such as AWS Lambda. This allows for easy deployment and scaling, which is particularly important for large-scale projects and massive-scale courses (van Viegen et al., 2021). Our API can also be combined with other existing educational tools in order to capture and store student responses for instructor review. # Question Design Instructors can provide a question for students to answer
|
2308.02439#9
|
2308.02439#11
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#11
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
either programmatically, by accessing our HTTP API, or graphically in the browser using the simple web application UI. Instructors can also provide optional assessment criteria: text like "make sure the student mentions DNA base pairs in their answer." FreeText can use question content to automatically establish grading criteria, or it can use the assessment criteria to improve the text of the question. The latter process works by asking the AI to serve as a student and answer a question while oblivious to the instructor's grading criteria. Then, the answer is automatically evaluated by a separate instantiation of the LLM, this time against the instructor criteria. The assessment model determines if the student has been unfairly penalized due to omission of requirements (or a lack of clarity) in the original question text. If so, the question is updated to better encompass the requirements of the grading criteria. This process of iteratively incorporating assessment criteria is subtly different from simply including the criteria in the question text: For example, if the question text is, "
|
2308.02439#10
|
2308.02439#12
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#12
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
What is the Rosetta Stone?" and the criteria include, "Mention why the Ptolemaic dynasty created the Rosetta Stone", a bad question update would be to explicitly ask about the Egyptian political system, as this gives the student more information than the instructor originally intended. A better question update would be "Explain what the Rosetta Stone is and the context of its creation," because this nudges the student to discuss the right material but does not give any new information. # Question Presentation There are two built-in methods to present questions to students: the first is a simple web API, which can be used standalone, coupled with response-collection tools, or embedded within other web applications. The second is a Jupyter Notebook widget that can be embedded in tutorial coding notebooks. The JSON web API endpoints may be accessed directly by application code, or students can access a simple web user interface. This interface comprises a question display and a textbox for student responses (see Supplemental Materials). Feedback to students is rendered beneath the response box upon answer submission, and students may reuse the same page to resubmit amended answers. The Jupyter Notebook widget is designed to make it easy for instructors to include open-ended questions in their assignments and subject the grading of student responses to custom grading criteria. This flexibility makes it easy for instructors to tailor the tool to their specific needs and teaching style. # Feedback to Students Our tool provides two types of feedback to students. The first is a holistic text response that provides feedback on the entire answer as a whole. The second is span-bound feedback (referring to a specific substring of the response) that can be used to highlight specific parts of the text that are erroneous or otherwise need
|
2308.02439#11
|
2308.02439#13
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#13
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
[Figure 2 diagram: sequence lanes for Educator, API, DB, and LLM showing (A) question and grading criteria creation returning a Question ID, (B) a student response retrieved with the question and combined into a prompt payload, and (C) LLM evaluation, validation, and return of the validated response; the diagram also shows the Jupyter widget code used to display a question.] Figure 2. A sequence diagram illustrating the flow of information within the FreeText system. A. First, an instructor formulates a question by supplying a student-facing question ("Question") along with grading criteria for the LLM to evaluate student responses. In return, the educator obtains a unique identifier from the database, instrumental in retrieving the question text in the following step. B. Equipped with a unique Question identifier, a student provides an answer to the educator's query ("Response"). The API receives this request, pairing the Response with a Prompt based upon the educator's question and criteria, and directs them towards a large language model for evaluation. C.
|
2308.02439#12
|
2308.02439#14
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#14
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
A screenshot of the FreeText Jupyter widget integrated into an interactive code notebook. student attention. For example, if a student's answer is correct but they misattribute a quote, the FreeText server could highlight the attribution specifically to give feedback. The type of feedback returned can be specified by the instructor during question creation. # Discussion Here we introduced FreeText, a framework capable of defining questions, collecting student responses, transmitting these responses alongside instructor expectations to a large language model (LLM), and generating rapid and personalized feedback for the students. Notably, the entirety of the student-facing workflow can be encapsulated within a Jupyter notebook, facilitating real-time enhancement of students' understanding of the course material. FreeText is not confined to a web application and Jupyter notebooks, or the academic subjects mentioned above. The FreeText Server can integrate with any application that consumes a JSON HTTP API, expanding its potential to a wider range of educational settings.
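One way to picture the holistic and span-bound feedback types described above is as simple records that either cover the whole answer or point at a substring of it. The field names and example below are illustrative assumptions, not the actual FreeText schema.

```python
# Illustrative sketch of holistic vs. span-bound feedback; field names are assumptions.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Feedback:
    comment: str
    # (start, end) character offsets into the student's response for span-bound
    # feedback; None means holistic feedback on the entire answer.
    span: Optional[Tuple[int, int]] = None

response = "Neurons transmit information. As Einstein said, neurons that fire together wire together."
feedback = [
    Feedback("Good start, but explain the role of neurotransmitters."),
    Feedback("This quotation appears to be misattributed.", span=(30, len(response))),
]
```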
|
2308.02439#13
|
2308.02439#15
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#15
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Our system's broad applicability becomes evident when considering diverse learning models, such as the pod-based approach adopted by the online course Neuromatch Academy (van Viegen et al., 2021) in the field of computational neuroscience. In such settings, small student groups or "pods" collaboratively tackle assignments and projects. Teaching Assistants, tasked with providing feedback, can benefit from our tool, as it can streamline grading processes, reducing potential for attentional errors and freeing up instructors to deliver more personalized guidance to students. Fully automated student evaluation is challenging both from a technical perspective and from a human perspective, and thus FreeText is designed not to fully automate grading, but to serve as a useful tool benefiting both students and instructors. FreeText benefits students by providing rapid and personalized feedback on short-answer questions. FreeText benefits instructors by helping them to design better questions and grading criteria, by providing first-pass material for learning assessments, and by alleviating some of the burden of providing individualized instruction in large classes. LLMs in general, and FreeText specifically, are not a replacement for human instructors, but they can nonetheless fill a niche among education technologies. LLMs undoubtedly hold immense power and potential. However, it is crucial to have an in-depth discussion about their ethical implications, especially in education. A key issue to consider is the potential biases that LLMs can introduce. These biases could unintentionally touch on sensitive subjects or unintentionally overlook marginalized groups. Instructors have a role to play by carefully designing their questions and assessment criteria. Further, students should be made aware of the nature of the system they are interacting with and its potential to make mistakes or act on internalized biases (Hsu et al., 2021). On the other hand, automated systems such as FreeText present an opportunity to reduce instructors' unconscious biases by evaluating all students' responses equally and without any explicit identification.
|
2308.02439#14
|
2308.02439#16
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#16
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Furthermore, we must consider the broader dynamics of the AI ecosystem. The realm of LLMs is not limited to the offerings of large AI conglomerates like OpenAI. A burgeoning industry of alternative LLMs, both from smaller commercial entities and open-source initiatives (Anthropic, 2023; Taori et al., 2023; Touvron et al., 2023; Wolf et al., 2020), is flourishing. Our
|
2308.02439#15
|
2308.02439#17
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#17
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
framework is designed to be model-agnostic and can be readily adapted to integrate these alternative models. Reliance solely on models from a single entity such as OpenAI raises two significant concerns. First, it centralizes the concentration of AI development resources and power, thereby exacerbating the already pronounced inequalities in the global AI landscape. Second, it can lead to a homogenization of the knowledge and perspectives propagated by AI models, potentially resulting in a limited and biased worldview.
|
2308.02439#16
|
2308.02439#18
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#18
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
FreeText is therefore deliberately agnostic to the underlying LLM model and technologies. We intend for our tool to enrich and expand students' educational experience, particularly in large-scale or resource-constrained course settings where detailed human intervention may be limited. Ongoing work includes the careful critique and evaluation of FreeText outputs by expert instructors, taking advantage of upcoming opportunities to apply this technology in a large class setting. Embracing both technical as well as human diversity helps mitigate many of the concerns raised above and enriches the AI ecosystem. A broad range of perspectives stalls the monopolization of AI technology and fosters a more balanced, equitable, and robust AI landscape. This viewpoint aligns with our belief in the need for broad and diverse human inputs, both in the creation of AI models and in their applications in society.
|
2308.02439#17
|
2308.02439#19
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#19
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
# Supplemental Materials Full-resolution versions of all images and tables from this publication are available at https://llm4edu.experiments.kordinglab.com/paper. The FreeText server will be hosted temporarily for public use at https://llm4edu.experiments.kordinglab.com/app, with an interactive example at https://llm4edu.experiments.kordinglab.com/app/assignments/1393754a-d80f-474d-bff7-b1fec36cdbb7. Educators may contact us at the correspondence email of this preprint for a token, which is required to create new questions on our public instance. The FreeText Jupyter widget is available on GitHub at https://github.com/KordingLab/freetext-jupyter, and is powered by the FreeText Server, which can be found at https://github.com/KordingLab/llm4teach-freetext-server. # Acknowledgements Research in this publication was supported by the National Institutes of Health under award number UC2-NS128361. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
|
2308.02439#18
|
2308.02439#20
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#20
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
# Bibliography Anthropic. Claude, 2023. URL https://www.anthropic.com. Accessed: 24 July 2023. T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, S. Agarwal, A. Herbert-Voss, G. Krueger, T. Henighan, R. Child, A. Ramesh, D. Ziegler, J. Wu, C. Winter, C. Hesse, M. Chen, E. Sigler, M. Litwin, S. Gray, B. Chess, J. Clark, C. Berner, S. McCandlish, A. Radford, I. Sutskever, and D.
|
2308.02439#19
|
2308.02439#21
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#21
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020. S. Burrows, I. Gurevych, and B. Stein. The eras and trends of automatic short answer grading. International Journal of Artificial Intelligence in Education, 25:60–117, 2015. C. Cao. Leveraging large language model and story-based gamification in intelligent tutoring system to scaffold introductory programming courses: A design-based research study, 2023.
|
2308.02439#20
|
2308.02439#22
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#22
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
H. W. Chung, L. Hou, S. Longpre, B. Zoph, Y. Tay, W. Fedus, Y. Li, X. Wang, M. Dehghani, S. Brahma, A. Webson, S. S. Gu, Z. Dai, M. Suzgun, X. Chen, A. Chowdhery, A. Castro-Ros, M. Pellat, K. Robinson, D. Valter, S. Narang, G. Mishra, A. Yu, V. Zhao, Y. Huang, A. Dai, H. Yu, S. Petrov, E. H. Chi, J. Dean, J. Devlin, A. Roberts, D. Zhou, Q. V. Le, and J. Wei. Scaling instruction-finetuned language models, 2022. M. O. Dzikovska, R. Nielsen, C. Brew, C. Leacock, D. Giampiccolo, L. Bentivogli, P. Clark, I. Dagan, and H. T. Dang.
|
2308.02439#21
|
2308.02439#23
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#23
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
SemEval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 263–274, 2013. J. Finnie-Ansley, P. Denny, B. A. Becker, A. Luxton-Reilly, and J. Prather.
|
2308.02439#22
|
2308.02439#24
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#24
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
The robots are coming: Exploring the implications of OpenAI Codex on introductory programming. In Proceedings of the 24th Australasian Computing Education Conference, pages 10–19, 2022. M. Heilman and N. Madnani. ETS: Domain adaptation and stacking for short answer scoring. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 2: Proceedings of the Seventh International Workshop on Semantic Evaluation (SemEval 2013), pages 275–
|
2308.02439#23
|
2308.02439#25
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#25
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
279. Association for Computational Linguistics, 2013. J. Hollingsworth. Automatic graders for programming classes. Communications of the ACM, 3(10):528–529, 1960. ISSN 0001-0782. doi: 10.1145/367415.367422. S. Hsu, T. W. Li, Z. Zhang, M. Fowler, C. Zilles, and K. Karahalios.
|
2308.02439#24
|
2308.02439#26
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#26
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Attitudes surrounding an imperfect AI autograder. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pages 1–15, 2021. T. Kluyver, B. Ragan-Kelley, F. Pérez, B. Granger, M. Bussonnier, J. Frederic, K. Kelley, J. Hamrick, J. Grout, S. Corlay, P. Ivanov, D. Avila, S. Abdalla, and C.
|
2308.02439#25
|
2308.02439#27
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#27
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Willing. Jupyter notebooks – a publishing format for reproducible computational workflows. In F. Loizides and B. Schmidt, editors, Positioning and Power in Academic Publishing: Players, Agents and Agendas, pages 87–90. IOS Press, 2016. G. Kortemeyer. Can an AI-tool grade assignments in an introductory physics course?, 2023. M. Lathkar. In High-Performance Web Apps with FastAPI: The Asynchronous Web Framework Based on Modern Python, pages 29–
|
2308.02439#26
|
2308.02439#28
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#28
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
64. Springer, 2023. X. Liu, S. Wang, P. Wang, and D. Wu. Automatic grading of programming assignments: An approach based on formal semantics. In 2019 IEEE/ACM 41st International Conference on Software Engineering: Software Engineering Education and Training (ICSE-SEET), pages 126–137, 2019. doi: 10.1109/ICSE-SEET.2019.00022.
|
2308.02439#27
|
2308.02439#29
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#29
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
M. Messer, N. C. C. Brown, M. Kölling, and M. Shi. Automated grading and feedback tools for programming education: A systematic review, 2023. Microsoft. Guidance. https://github.com/microsoft/guidance, 2023. Accessed: 24 July 2023. A. Mizumoto and M. Eguchi. Exploring the potential of using an AI language model for automated essay scoring. Research Methods in Applied Linguistics, 2(2):100050, 2023. ISSN 2772-7661. doi: 10.1016/j.rmal.2023.100050.
|
2308.02439#28
|
2308.02439#30
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#30
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
T. Mizumoto, H. Ouchi, Y. Isobe, P. Reisert, R. Nagata, S. Sekine, and K. Inui. Analytic score prediction and justification identification in automated short answer scoring. In Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pages 316–325. Association for Computational Linguistics, 2019. doi: 10.18653/v1/W19-4433. J. W. Orr and N. Russell.
|
2308.02439#29
|
2308.02439#31
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#31
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Automatic assessment of the design quality of Python programs with personalized feedback. arXiv preprint arXiv:2106.01399, 2021. A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever. Language models are unsupervised multitask learners. 2019. B. Riordan, A. Horbach, A. Cahill, T. Zesch, and C. M. Lee. Investigating neural architectures for short answer scoring. In Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pages 159–
|
2308.02439#30
|
2308.02439#32
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#32
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
168. Association for Computational Linguistics, 2017. doi: 10.18653/v1/W17-5017. S. Saha, T. I. Dhamecha, S. Marvaniya, P. Foltz, R. Sindhgatta, and B. Sengupta. Joint multi-domain learning for automatic short answer grading. arXiv preprint arXiv:1902.09183, 2019. C. Sung, T. Dhamecha, S. Saha, T. Ma, V. Reddy, and R. Arora.
|
2308.02439#31
|
2308.02439#33
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#33
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Pre-training BERT on domain resources for short answer grading. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 6071–6075. Association for Computational Linguistics, 2019. doi: 10.18653/v1/D19-1628. R. Taori, I. Gulrajani, T. Zhang, Y. Dubois, X. Li, C. Guestrin, P. Liang, and Table 1. Feedback examples.
|
2308.02439#32
|
2308.02439#34
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#34
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
[Table 1 columns: Instructor-Generated Question Prompt, Criteria, Student-Generated Answer, AI-Generated Feedback] Question: Explain what a neuron is, detailing how they transmit information and what unique features they have. Criteria: Must include the terms "synapse" and "action potential." Student answer: Neurons are cells that transmit information to other nerve, muscle, or gland cells. They use synapses. AI feedback: - You correctly state that neurons are cells that transmit information to other nerve, muscle, or gland cells. - You also mention the use of synapses, which is correct as synapses are the connections between neurons where information is transmitted.
|
2308.02439#33
|
2308.02439#35
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#35
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
However, your response is quite brief and lacks important details and explanations. Here are some points that you may have missed or misunderstood: - You did not provide any information about how neurons transmit information. It would be helpful to explain that neurons transmit information through electrical impulses called action potentials. - You did not mention the role of neurotransmitters in transmitting information between neurons. It is important to explain that neurotransmitters are chemical messengers released at the synapse to transmit the signal from one neuron to the next. - Your response could benefit from more specific and detailed information about the unique features of neurons, such as the three main parts of a neuron (cell body, dendrites, Question: What are the components and structure of a molecule of DNA? Criteria: Mention base pairs, sugar, and phosphate. Describe that DNA is a double helix. Note that base pairs pair up in a specific way using hydrogen bonds (AT and GC). Student answer: DNA is a complex molecule and it is shaped like a double helix ladder, where the rungs are base pairs ATGC and the scaffold is sugars and phosphates. The base pairs bind (A with G) and (C with T) using hydrogen bonds, which can be separated when the DNA is being read or duplicated.
|
2308.02439#34
|
2308.02439#36
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#36
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
and axon) and how the action potential is propagated along the axon. Overall, your response is a good start, but it would be more complete and accurate if you include the missing details mentioned above. Keep up the effort! Criteria (continued): Must mention the role of neurotransmitters. AI feedback: Correct points: - You correctly mentioned that DNA is shaped like a double helix ladder. - You accurately stated that the base pairs in DNA are ATGC. - You correctly noted that the base pairs bind using hydrogen bonds.
|
2308.02439#35
|
2308.02439#37
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#37
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Points to improve: - You mentioned that the base pairs bind as A with G and C with T, but it's important to clarify that adenine pairs with thymine (A-T) and guanine pairs with cytosine (G-C). Overall, you have a solid understanding of the components and structure of DNA. Just make sure to clarify the pairing of base pairs and the temporary breaking of hydrogen bonds during DNA processes. Good work! Question: How can large language models introduce biases into student evaluation? Criteria: (None provided) Student answer: LLMs have bias because their training data can have toxic, biased, or inaccurate data in it. When evaluating students, LLMs could also penalize students that know information that is more recent or otherwise outside the LLM's training set, which may appear to be inaccurate to the AI model. LLMs are also not designed to keep track of accurate information; they are autoregressive language models, and so they do not have a legitimate hold on fact, and caution should be used when depending on an AI model for subtle communication. AI feedback: Great job in identifying some of the key factors that contribute to biases in large language models (LLMs) in student evaluation. It would be helpful to provide specific examples or evidence of how LLMs can have biased or inaccurate data in their training set.
|
2308.02439#36
|
2308.02439#38
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#38
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
This would strengthen your argument and provide more clarity. - Your statements align with the current understanding of LLMs and their limitations. Overall, your response demonstrates a good understanding of the topic. To further enhance your answer, consider providing more specific examples and expanding on the potential implications of relying on LLMs for student evaluation. Great job! Question: How can large language models introduce biases into student evaluation? Criteria: (None provided) Student answer: LLMs are biased because they're made by large companies. AI feedback: Your response to the prompt raises an important point, but there are a few areas where your answer could be elaborated. Large language models (LLMs) are indeed created by large companies, but it's important to note that the fact that LLMs are made by large companies does not directly imply bias. Bias can arise from various factors related to the training data and model design. Consider mentioning factors such as the source of training data, language patterns, prejudiced examples, feedback loops, lack of contextual understanding, and how evaluation metrics may steer LLM responses. Overall, your response is a good starting point, but it would benefit from further elaboration and a deeper understanding of the specific factors that contribute to biases in student evaluation by large language models.
|
2308.02439#37
|
2308.02439#39
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#39
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Keep up the effort! T. B. Hashimoto. Alpaca: A strong, replicable instruction-following model. Stanford Center for Research on Foundation Models. https://crfm.stanford.edu/2023/03/13/alpaca.html, 3(6):7, 2023. H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, et al.
|
2308.02439#38
|
2308.02439#40
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#40
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288, 2023. L. C. Ureel II and C. Wallace. Automated critique of early programming antipatterns. In Proceedings of the 50th ACM Technical Symposium on Computer Science Education, SIGCSE '19, pages 738–744. Association for Computing Machinery, 2019. ISBN 978-1-4503-5890-3. doi: 10.1145/3287324.3287463. T. van Viegen, A. Akrami, K. Bonnen, E. DeWitt, A. Hyafil, H. Ledmyr, G. W. Lindsay, P. Mineault, J. D. Murray, X. Pitkow, et al. Neuromatch Academy: Teaching computational neuroscience with global accessibility. Trends in Cognitive Sciences, 25(7):535–
|
2308.02439#39
|
2308.02439#41
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#41
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
538, 2021. T. Wolf, L. Debut, V. Sanh, J. Chaumond, C. Delangue, A. Moi, P. Cistac, C. Ma, Y. Jernite, J. Plu, C. Xu, T. Le Scao, S. Gugger, M. Drame, Q. Lhoest, and A. M. Rush. Transformers: State-of-the-Art Natural Language Processing. pages 38–45. Association for Computational Linguistics, Oct. 2020. URL https://www.aclweb.org/anthology/2020.emnlp-demos.6.
|
2308.02439#40
|
2308.02439#42
|
2308.02439
|
[
"2106.01399"
] |
2308.02439#42
|
A large language model-assisted education tool to provide feedback on open-ended responses
|
S.-Y. Yoon. Short answer grading using one-shot prompting and text similarity scoring model, 2023. X. Zhu, H. Wu, and L. Zhang. Automatic short-answer grading via BERT-based deep neural networks. IEEE Transactions on Learning Technologies, 15(3):364–375, 2022.
|
2308.02439#41
|
2308.02439
|
[
"2106.01399"
] |
|
2307.13692#0
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
arXiv:2307.13692v2 [cs.CL] 28 Jul 2023 # ARB: Advanced Reasoning Benchmark for Large Language Models # Tomohiro Sawada1,2,*, Daniel Paleka1,3, Alexander Havrilla1,2, Pranav Tadepalli1,2, Paula Vidas1, Alexander Kranias1,2, John J. Nay4,5, Kshitij Gupta1,6, Aran Komatsuzaki1,2,‡‡
|
2307.13692#1
|
2307.13692
|
[
"2212.14402"
] |
|
2307.13692#1
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
1 DuckAI 2 Georgia Tech 3 ETH Zürich 4 Nomos AI 5 Stanford University Center for Legal Informatics 6 MILA # Abstract Large Language Models (LLMs) have demonstrated remarkable performance on various quantitative reasoning and knowledge benchmarks. However, many of these benchmarks are losing utility as LLMs get increasingly high scores, despite not yet reaching expert performance in these domains. We introduce ARB, a novel benchmark composed of advanced reasoning problems in multiple fields. ARB presents a more challenging test than prior benchmarks, featuring problems in mathematics, physics, biology, chemistry, and law. As a subset of ARB, we introduce a challenging set of math and physics problems which require advanced symbolic reasoning and domain knowledge. We evaluate recent models such as GPT-4 and Claude on ARB and demonstrate that current models score well below 50% on more demanding tasks. In order to improve both automatic and assisted evaluation capabilities, we introduce a rubric-based evaluation approach, allowing GPT-4 to score its own intermediate reasoning steps. Further, we conduct a human evaluation of the symbolic subset of ARB, finding promising agreement between annotators and GPT-4 rubric evaluation scores.
|
2307.13692#0
|
2307.13692#2
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#2
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# 1 Introduction In recent years, models such as GPT-3 [Brown et al., 2020], GPT-4 [OpenAI, 2023], PaLM [Chowdhery et al., 2022], and Chinchilla [Hoffmann et al., 2022] have shown increasing performance across a wide variety of natural language tasks ranging from translation to reasoning [Bubeck et al., 2023, Laskar et al., 2023]. This rapid progress has been closely tracked and assessed by evaluating LLMs on benchmarks, which test model capabilities on a set of standardized problems. The GLUE benchmark [Wang et al., 2019b] for language understanding was first released in April 2018, but models such as BERT [Devlin et al., 2019] and GPT-2 [Radford et al., 2019] in the following year were already powerful enough to necessitate the 'SuperGLUE' benchmark [Wang et al., 2019a]. Since then, the race between language models and benchmarks has increasingly favored the former. Scaling up model sizes and datasets alike has led to rapid improvements on various natural language tasks on benchmarks like BIG-bench [Srivastava et al., 2022] and HELM [Liang et al., 2022]. Neural scaling laws [Kaplan et al., 2020, Caballero et al., 2023, Alabdulmohsin et al., 2022] have been used to predict the behavior of large scale models on various metrics. Nevertheless, LLM performance often increases unpredictably [Wei et al., 2022a], especially on tasks that require reasoning abilities. Predictions of performance on ML benchmarks often underestimate the rate of progress [Steinhardt, 2022]. Since progress has been faster than anticipated, new benchmarks need to be more difficult.
|
2307.13692#1
|
2307.13692#3
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#3
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Email: [email protected]. â ¡â ¡Email: [email protected]. Models such as ChatGPT have shown the ability to pass entry-level examinations in fields such as law [Bommarito II and Katz, 2022], medicine [Kung et al., 2023], economics [Caplan, 2023], and mathematics [Shakarian et al., 2023]. Nevertheless, LLM understanding of many fields is reportedly shallow and unreliable [Shapira et al., 2023]. Expert reasoning in domains with specialized knowledge is essential for automated systems to augment skilled professionals [Noy and Zhang, 2023]. In this paper, we introduce a new benchmark dataset, ARB (Advanced Reasoning Benchmark), designed to evaluate expert reasoning abilities in mathematics, physics, chemistry, biology, and law. To make the benchmark more challenging than previous benchmarks, we extract graduate-level tasks from resources intended for domain professionals. The performance of current models such as GPT-4 on the quantitative parts of ARB is very low using standard prompting methods. Our dataset offers improvements over existing benchmarks:
|
2307.13692#2
|
2307.13692#4
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#4
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
• Hundreds of problems requiring expert reasoning in quantitative subjects, where LLMs are known to underperform; • A large percentage of the problems are short-answer and open response questions, in contrast to the multiple-choice questions that dominated earlier benchmarks. In addition, we propose an automated rubric-based method allowing self-evaluation of intermediate reasoning steps. While not currently a substitute for human evaluation, rubrics generated by GPT-4 have good coverage, and self-evaluation scores track human grading surprisingly well. We provide the instructions to access the dataset in the supplementary material. # 2 Related Work Improving the reasoning capabilities of LLMs has been a subject of recent interest, with a particular focus on advanced prompting techniques [Wei et al., 2022b, Kojima et al., 2023, Wang et al., 2023, Yao et al., 2023, Nye et al., 2021]. Such techniques have seen increasingly successful applications in solving reasoning problems involving commonsense reasoning and mathematics, by promoting active reasoning processes within the LLMs before yielding final answers. Model architectures such as Minerva [Lewkowycz et al., 2022] have exemplified the enhancement of reasoning capabilities through fine-tuning on extensive datasets covering math and reasoning tasks. This has yielded improved performance across several benchmarks, including MATH [Hendrycks et al., 2021], GSM8K [Cobbe et al., 2021], and MMLU [Hendrycks et al., 2020]. Concurrently, other lines of research [Li et al., 2023, Lightman et al., 2023, Cobbe et al., 2021] have investigated the application of verification techniques to augment and enhance LLM performance.
|
2307.13692#3
|
2307.13692#5
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#5
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Most of the aforementioned work has typically evaluated techniques against math benchmarks (e.g., GSM8K [Cobbe et al., 2021], MATH [Hendrycks et al., 2021], SVAMP [Patel et al., 2021], ASDiv [Miao et al., 2020], AQuA [Ling et al., 2017], MAWPS [Koncel-Kedziorski et al., 2016], MultiArith [Roy and Roth, 2016]) and commonsense reasoning tasks (e.g., CSQA [Talmor et al., 2018], StrategyQA [Geva et al., 2021], HotpotQA [Yang et al., 2018]). Recently, several new benchmarks have been introduced for reasoning and planning tasks, such as the GPT-Planning Benchmark [Valmeekam et al., 2023], ALERT Reasoning Benchmark [Yu et al., 2022], JEEBench [Arora et al., 2023], and the abstract reasoning benchmark of Gendron et al. [2023]. Additionally, comprehensive evaluation suites like the Chain-of-Thought Hub [Fu et al., 2023] have been proposed. Despite their utility, existing benchmarks are limited in difficulty, represent a restricted range of reasoning challenges, and do not necessarily mirror real-world tasks demanding complex reasoning. Moreover, recent advancements such as Minerva [Lewkowycz et al., 2022] have revealed that these benchmarks may not offer sufficient challenge. The rapid progress in LLM capabilities has led many to explore using LLMs in the LLM evaluation pipeline. Apart from using LLMs to generate evaluation tasks [Zhang et al., 2022, Perez et al., 2022], LLMs have increasingly been used as a proxy for human evaluation [Chiang and Lee, 2023, Liu et al., 2023, Fu et al., 2023, Kocmi and Federmann, 2023]. Useful LLM-based evaluation for alignment has been done using rubrics [Bai et al., 2022]. We explore the efficacy of rubrics for evaluation when applied to highly complex math and physics problems.
|
2307.13692#4
|
2307.13692#6
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#6
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# 3 Benchmark The key considerations when building a machine learning benchmark are: • Difficulty. Most tasks have to be out of reach of current models; a benchmark where many models score over 95% is not useful for tracking differential AI development. • Usefulness. The tested skills should correlate with generally useful human skills. • Ease of evaluation. It should be straightforward for the model creators to compare the performances of different models. The scores should be interpretable. • Minimizing data contamination. A consistent issue with popular benchmarks is that recent LLMs contain some tasks in their training data [OpenAI, 2023]. This leads to overestimation of true model capabilities.
|
2307.13692#5
|
2307.13692#7
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#7
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
• Connection to general capabilities. If a model is trained on data similar to the benchmark, it is possible it achieves high performance without generalization or 'intelligence', failing to solve novel tasks of similar difficulty [Chollet, 2019]. Conversely, problems should not be pathological or overly adversarial, to avoid the dangers of underclaiming [Bowman, 2021]. # 3.1 Formatting The benchmark consists of three types of questions: multiple choice, short answer, and open response, in descending order of proportion in the dataset. • Multiple choice questions consist of a question and four to five possible answers, and the correct answer is the one that best answers the question. They were sourced from standardized tests, such as the MCAT and bar exam prep, and make up a large proportion of the dataset due to their ease of grading. • Short answer questions, on the other hand, ask for final answers in the format of a short phrase or mathematical expression. They were sourced from problem books such as Souza and Silva [2008], Gelca and Andreescu [2017], and the physics book series Lim and Qiang [2001], Lim [2007], Lim [1998], Lim et al. [2019], and Lim [1996]. We generally avoided algebraic expressions, because of technical difficulties in the grading process. A given algebraic expression may have several equivalent forms (e.g. nontrivial functional relations for the functions appearing in the final answer), and a grading scheme which accounts for all possible variations across our entire dataset is not feasible. Moreover, physics problems often require answers introducing new notation that is not explicitly mentioned in the problem statement.
|
2307.13692#6
|
2307.13692#8
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#8
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
• Open response questions are more challenging: they consist of a question and a blank space for the answer. They were sourced from problem books and exams, such as the Harvard PhD comprehensive exams in mathematics [Harvard University, 2021]. Such tasks require manual grading. These questions are aspirational in nature, as current systems (e.g. ChatGPT) cannot produce satisfactory responses, even for the 'elementary' problems. # 3.2 Mathematics This part of the dataset is the most diverse. It includes contest mathematics problems as well as 'university mathematics' (i.e. mathematics traditionally taught in universities at the undergraduate and beginning graduate level). The contest problems are sourced from Gelca and Andreescu [2017] and Brayman and Kukush [2018], and the university mathematics problems are sourced from Souza and Silva [2008] and Harvard University [2021]. The dataset does not include high school contest problems because those are already present in other well-known benchmarks [Hendrycks et al., 2021]. The Putnam and Brayman books both contain official solutions, which we also include in the dataset. This can be useful for fully automating the grading process, which we leave to future work. For university mathematics, we pick Souza and Silva [2008] for its large selection of 'standard' undergraduate mathematics problems, as well as many problems suitable for the short answer portions. We also select Harvard University [2021] because it covers topics that other collections of exams rarely cover, such as representation theory of finite groups and algebraic topology.
|
2307.13692#7
|
2307.13692#9
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#9
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Table 1: Types of problems in the benchmark by subject area.

Subject | Answer Type | Number
Mathematics | Numerical | 52
Mathematics | Symbolic | 34
Mathematics | Proof-like | 19
Physics | Numerical | 80
Physics | Numerical (w/ image) | 18
Physics | Symbolic | 18
Physics | Symbolic (w/ image) | 13
Law | Multiple Choice | 627
MCAT (Reading) | Multiple Choice | 165
MCAT (Science) | Multiple Choice | 144
MCAT (Science) | Multiple Choice (w/ image) | 37

# 3.3 Physics The physics problems are structured similarly to the math problems. The main difference is that some physics problems contain figures, and there are more problems with numerical answers. The problems were sourced from the Major American Universities PhD Qualifying Questions and Solutions series [Zhongguo-Kexue-Jishu-Daxue, 1990]. # 3.4 MCAT The MCAT test contains multiple choice problems testing biology, psychology, chemistry, physics, and reading comprehension. The MCAT problems are sampled from the third edition of McGraw-Hill Education 3 MCAT Practice Tests [Campbell et al., 2017] and cover both science and reading questions. This book was chosen as very few of these problems appear in standard web-searchable sources, limiting contamination. As in the previous categories, we pick problems which are self-contained. Because some MCAT science questions are accompanied by images, we accompany such questions with corresponding image files. # 3.5 Law Applying law involves the application of logical reasoning, in addition to grasping legal knowledge. This makes assessments of legal skills an especially attractive type of language model benchmark, where we are attempting to assess the reasoning and intelligence of these models. Furthermore, if the models better understand law, they can be more reliable and ultimately more useful in real-world applications, potentially even increasing the efficiency and transparency of governments more broadly. Most lawyers in the U.S. go to law school, graduate, then study for the Bar Examination, and then must pass the bar before going on to practice law professionally. To evaluate legal understanding of the models, we use an older Bar Examination practice set that, to the best of our knowledge, is not available online in a way that could have led to its inclusion in training data for the language models that we are assessing. The practice bar exam we administer to the various language models covers most major areas of law and therefore it tests legal reasoning and broad U.S. legal knowledge. # 4 Evaluation
|
2307.13692#8
|
2307.13692#10
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#10
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
We evaluate current LLMs on all text-only problems in our dataset. Other LLM benchmark papers do not evaluate on multimodal tasks due to the lack of good multimodal models; we follow suit. Given public communications about GPT-4 [OpenAI, 2023] and Gemini [Ghahramani, 2023], it is likely the physics and MCAT image problems will be useful for testing multimodal LLMs soon. Models. We evaluate ChatGPT (gpt-3.5-turbo-0301), GPT-3.5 (text-davinci-003), GPT-4 with 8k context length (gpt-4-0314), and Claude (claude-v1.3-100k). We evaluate all question types using task-specific instructions and chain of thought. In chat models, we put the instructions as the system prompt; otherwise we put them at the beginning of the prompt. In all problem types, in order to extract the model's final answer, we instruct the model to write its final answer at the end of the response after the delimiter ANSWER:. We then parse the model-generated final answer as the remaining text after the delimiter. The response is marked as incorrect if the delimiter is not found. Due to the differences in evaluation for multiple choice versus open-ended responses, we adopt multiple evaluation procedures. Multiple choice. To evaluate multiple choice questions, we can simply compare the extracted final answer to the ground truth. A response is considered correct if the extracted choice matches the ground truth choice. With appropriate prompting, all models output a parsable answer > 97% of the time. We conduct a separate manual evaluation on a sampled subset of the questions to check that our parsing procedure is not mischaracterizing the true performance of the model. Numerical. To evaluate problems with a numerical final answer, we first extract the delimited model answer as above. In the physics problems, many answers are in units; we prompt the model with information about the unit, and instruct it to fully simplify its answer and omit any units. However, sometimes the model forgets to do either or both, and so we apply a series of regexes to remove units. We then attempt to parse the result into a mathematical expression using Python's SymPy library [Meurer et al., 2017]. If this parsing fails, the answer is marked as incorrect.
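To make the parsing step concrete, the following is a minimal sketch of delimiter-based answer extraction and multiple-choice grading as described above (illustrative only, not the authors' released evaluation code; the function names and the choice-letter regex are our own assumptions):

```python
import re
from typing import Optional

DELIMITER = "ANSWER:"

def extract_final_answer(response: str) -> Optional[str]:
    """Return the text after the last 'ANSWER:' delimiter, or None if the delimiter is missing."""
    idx = response.rfind(DELIMITER)
    if idx == -1:
        return None  # no delimiter found: the response is marked incorrect
    return response[idx + len(DELIMITER):].strip()

def grade_multiple_choice(response: str, ground_truth: str) -> bool:
    """Correct iff the extracted choice letter matches the ground-truth letter."""
    answer = extract_final_answer(response)
    if answer is None:
        return False
    match = re.match(r"\(?([A-Ea-e])\b", answer)  # e.g. "(C)", "C", "C) ..." -> "C"
    return bool(match) and match.group(1).upper() == ground_truth.strip().upper()

if __name__ == "__main__":
    print(grade_multiple_choice("Step-by-step reasoning...\nANSWER: (C)", "c"))  # True
```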
|
2307.13692#9
|
2307.13692#11
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#11
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Once parsed, we score the model answer as correct if the relative error |model_answer - ground_truth| / |ground_truth| is below the chosen threshold (10^-2 for the results in Figure 1). Symbolic. Problems with symbolic answers are less structured and harder to parse. To do so, we again leverage SymPy, first normalizing expressions to contain a default set of variable names and then checking for equivalence up to a permutation of the variables. However, this approach is error-prone and only works for the subset of symbolic responses in a function form. More advanced responses, such as those containing set notation, require human evaluation. Proof-like. Natural language proofs cannot be evaluated automatically; the authors with training in mathematics grade the proofs. Further manual human evaluation requires a thorough inspection of the intermediate reasoning steps. This makes evaluation expensive in practice. Model-based evaluation. To address the difficulties in developing automated metrics for evaluating more advanced problems, we experiment with two model-based approaches. First, we prompt ChatGPT to grade the equivalence of two symbolic expressions, with score options of 0 when they are totally incorrect, 0.5 when the symbolic expressions are nearly the same, e.g. equivalent up to a constant, and 1 when they are an exact match. Our prompting strategy can be found in the supplementary material. More generally, we evaluate the capabilities of GPT-4 to grade intermediate reasoning chains via a rubric-based evaluation approach. For symbolic and proof-like problems, we few-shot prompt GPT-4 to create a 10-point rubric. This is done by handwriting a small set of initial rubrics for proof-like problems and prompting the model with these examples and the ground truth reference solution. The model assigns point values to intermediate steps using the reference solution as a guide. This process is illustrated in the supplementary material. With model-generated rubrics in hand, we then evaluate each question against its rubric. This is done by again prompting GPT-4 to go step by step through the model answer and assign partial credit based on the rubric. This provides a denser automatic evaluation metric on increasingly unstructured answers. As a nice side benefit, it makes human evaluation of complex symbolic questions much easier, significantly reducing the amount of time required per question. # 4.1 Results We now evaluate gpt-4, gpt-3.5-turbo, text-davinci-003, and claude-v1.3 on ARB. The results for the mechanically scored subjects are in Figure 1.
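As a rough illustration of the numerical and symbolic checks described above (a simplified sketch, not the paper's actual grading pipeline; the unit-stripping regex, the tolerance handling, and the equivalence test are our own simplifying assumptions):

```python
import re
import sympy
from sympy.parsing.sympy_parser import parse_expr

def check_numerical(model_answer: str, ground_truth: float, rel_tol: float = 1e-2) -> bool:
    """Strip leftover unit strings, parse with SymPy, and compare by relative error."""
    cleaned = re.sub(r"(?i)\b(m/s|kg|m|s|J|eV|N)\b", "", model_answer).strip()
    try:
        value = float(parse_expr(cleaned))
    except Exception:
        return False  # unparsable answers are marked incorrect
    return abs(value - ground_truth) / abs(ground_truth) <= rel_tol

def check_symbolic(model_answer: str, ground_truth: str) -> bool:
    """Equivalence check for closed-form expressions (the paper additionally normalizes
    variable names and tries permutations of the variables)."""
    try:
        diff = sympy.simplify(parse_expr(model_answer) - parse_expr(ground_truth))
    except Exception:
        return False
    return diff == 0

print(check_numerical("3.1415 m/s", 3.14159))        # True: within 1% relative error
print(check_symbolic("sin(x)**2 + cos(x)**2", "1"))  # True: simplifies to the same expression
```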
|
2307.13692#10
|
2307.13692#12
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#12
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
(Bar chart comparing the accuracy of claude-v1.3, text-davinci-003, gpt-3.5-turbo, and gpt-4 across subject areas.) Figure 1:
|
2307.13692#11
|
2307.13692#13
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#13
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Accuracy of models over automatically scored components of the ARB benchmark. Numerical questions are evaluated with a relative error threshold of 10^-2. We see models generally do quite well on the multiple choice Law and MCAT subsets, but struggle significantly on questions with numerical final answers. GPT-4 is the only model capable of reliably simplifying complex expressions, but even GPT-4 struggles to reliably perform arithmetic and symbolic manipulations over long contexts. On the multiple-choice questions, the only model that cannot reliably follow the answer formatting instructions is gpt-3.5-turbo. This happens for a variety of reasons, including the model refusing to answer or to commit to a single answer choice. On the Law benchmark, gpt-3.5-turbo does not output a parsable answer around 25% of the time. The other models exhibit this failure in less than 5% of multiple-choice questions, with GPT-4 being correctly parsed over 99% of the time. We see a similarly low performance profile across models on symbolic problems, reported in Table 2.

Table 2: Manually parsed scores for symbolic answer questions.

Model | Math Symbolic | Physics Symbolic
gpt-4-0314 | 18% | 28%
gpt-3.5-turbo-0301 | 12% | 6%
text-davinci-003 | 3% | 6%
claude-v1.3-100k | 3% | 11%
|
2307.13692#12
|
2307.13692#14
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#14
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# 4.2 What Kind of Errors Do LLMs Make? The GPT-4 evaluation paper [Bubeck et al., 2023] classified errors GPT-4 makes in single-pass evaluation on GSM8K [Cobbe et al., 2021] and MATH [Hendrycks et al., 2021] into three types: arithmetic mistakes, misunderstood statement, and wrong approach. We make a more fine-grained analysis and extend it to math and physics problems in our dataset. The results are in Table 3. The errors current LLMs make on the Mathematics part of ARB fall into five general types: • Misunderstanding / answering only a part of the question / misread problem; • Wrong approach: the model's early chain of thought does not guess the right approach;
|
2307.13692#13
|
2307.13692#15
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#15
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Table 3: Mistakes on mathematics and physics problems in ARB, GPT-4.

Subject | Misread problem | Wrong approach | Logical error or hallucination | Arithmetic mistake | Correct answer | Correct reasoning
Math Numerical | 0% | 25% | 88% | 48% | 3% | 3%
Math Symbolic | 16% | 50% | 29% | 4% | 16% | 16%
Proof-like | 5% | 50% | 72% | 16% | n/a | 5%
Physics Numerical | 0% | 80% | 53% | 6% | 6% | 6%
Physics Symbolic | 0% | 37% | 68% | 31% | 28% | 12%
|
2307.13692#14
|
2307.13692#16
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#16
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
â ¢ Logical errors: the model uses a false implication between two statements; â ¢ Hallucinating facts or theorems: the model confabulates a statement that is false in general, or not applicable in context; â ¢ Arithmetic/calculation error: the model multiplies incorrectly, omits a term in an expression, gives a wrong numerical value for a fraction, and other similar mistakes. We grade GPT-4 using the above as a guideline. Our grading of the modelâ s CoT answers is not mutually exclusive; if the model both uses an approach that doesnâ t go anywhere and makes a calculation error in it, we count it towards both categories. Note that the errors might not be independent: arithmetic mistakes could be more or less frequent in wrong approach solutions as opposed to the solutions with correct idea. We notice that the model is likely to make incorrect simplifications to get to some final answer in approaches that cannot work; this is expected, as prompting the model to produce a solution with a final answer often leads it to produce some final answer by any means. When the model outputs a chain of implications, it is not always clear whether some false statement is due to a logical error, or it is a straight-out confabulation. We merge those two error types in Table 3. Some problems ask for multiple things to be proven or calculated. Our graders gave the model a score of 0.5 if it correctly derived at least half of the "subproblems" (for example, homology groups of a given manifold). With this more benevolent form of grading, the performance of GPT-4 on the Proof-like problems jumps to 16%. Where applicable, slight discrepancy with automatic evaluation is also possible due to the error tolerance. We note that many of the problems in Physics Symbolic have correct symbolic answers even when there are flaws in the chain of thought reasoning of GPT-4. This is likely due to some kind of memorization, although not necessarily from the same sources: see Table 12 for an example. It is possible that our graders underestimate the rate of arithmetic mistakes in some cases, especially when the approach is clearly wrong, or it is not clear whether a given error is due to faulty reasoning or due to a missed term in the calculations.
|
2307.13692#15
|
2307.13692#17
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#17
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
For the larger subsets (see Table 1), we subsample the problems to between 20 and 40 per subject area; this is enough for a ballpark estimate of the frequency of different errors, and is not worth increasing because attributing error types is inherently fuzzy. # 5 Model-based Rubric Evaluation As reasoning tasks increase in complexity, it gets harder to evaluate model performance. Symbolic final answers are in some cases difficult to grade automatically. Further, we are often more interested in the correctness of the reasoning used to produce the final answer; but evaluating intermediate reasoning steps requires expert human supervision. An ideal solution would be to use LLMs as evaluators based on a reference solution; unfortunately, there are major reliability issues. To improve reliability, we propose generating rubrics as an important component of the evaluation process. The model generates the rubric from the reference solution, then evaluates any solution based on the generated rubric. To aid rubric generation, we give few-shot examples of human-written rubrics to the rubric-generating model run. We study this approach by conducting a human evaluation of GPT-4 generated rubrics and the GPT-4 grading of its own solutions using the generated rubrics.
|
2307.13692#16
|
2307.13692#18
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#18
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
7 We rate the quality of GPT-4 generated rubrics by hand in the first two rows of Table 4. Likert scores from 1-5 are assigned to both the coverage of the rubric, i.e. how well it captures key subproblems, and the point breakdown. Rubric quality scores are reported in Table 5 for symbolic and proof-like problems. We find GPT-4 designs rubrics which cover the crucial solution steps well, but struggles to properly allocate points to each step based on relative importance. However, it is much better than GPT-3.5-turbo, which tends to over-allocate points to only one or two solution steps. Table 4: Evaluations of rubric quality and GPT-4 rubric evaluation failure cases. Rubric coverage and rubric point spread are on a 1-5 Likert scale. Alternative solutions is the percentage of correct solutions found not covered by the rubric. Extra/reduced credit track how often GPT-4 erroneously assigns or deducts points. Hallucinated rubric tracks how often GPT-4 assigns points by referring to a rubric item not actually present in the rubric.
|
2307.13692#17
|
2307.13692#19
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#19
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
| Physics Symbolic | Math Symbolic | Proof-like
Human eval score | 5.00 | 3.13 | 2.65
Model eval score | 5.05 | 3.37 | 3.8
Correlation | 0.91 | 0.78 | 0.82

We find a moderately high correlation between GPT-4's evaluation score and the manual score. In some cases, the model assigns an extra point or two when compared to the annotated rubric score. However, the self-eval score almost never deviates more than two points from the ground truth. The main failure mode we detect is the assignment of partial credit to attempted solutions completely outside the problem rubric, where the human evaluation score is always zero. Taken together, we believe these results suggest that rubric-based evaluation is a promising automated evaluation method. Having established rubric-based evaluation as an (imperfect) proxy for correctness, we now comment on the GPT-4 performance graded by the rubric. Table 5 shows GPT-4 is best at generating correct intermediate reasoning steps for physics questions. Inspecting the model outputs suggests that GPT-4 is good at recalling relevant and useful concepts in physics for solving the relevant problem; however, it can struggle with the mathematical manipulations required to solve the problem. The model is worse at recognizing the correct concepts and formulating an appropriate plan for the math questions, particularly for proof-like problems.
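The correlation row in Table 5 is a standard Pearson coefficient over per-problem scores; a minimal sketch of how such a number is computed (with made-up placeholder scores, not the paper's data):

```python
import numpy as np

# Hypothetical per-problem rubric scores out of 10 (placeholders, not the paper's data).
human_scores = np.array([8.0, 3.5, 6.0, 2.0, 9.0])
model_scores = np.array([7.5, 4.0, 6.5, 3.0, 9.0])

# Pearson correlation coefficient between human and GPT-4 rubric scores, over all problems.
pearson_r = np.corrcoef(human_scores, model_scores)[0, 1]
print(f"Pearson r = {pearson_r:.2f}")
```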
|
2307.13692#18
|
2307.13692#20
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#20
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# 6 Limitations and Conclusion In this paper, we presented ARB, a novel benchmark for evaluating advanced reasoning capabilities in large language models. Our dataset is composed of various problems from the sciences and law, sourced from graduate-level exams and professional resources. Despite advancements in current LLMs, their performance remains very low on ARB's quantitative tasks. We also proposed a rubric-based self-evaluation method, enabling LLMs to grade their own reasoning. This method is not yet reliable enough to replace human grading. We hope this method can be extended to more reliable and cheap testing of complex model outputs. As with all other benchmarks that are not created anew and kept secret, it is possible there is data contamination. For example, the MCAT books are not available for free in most jurisdictions, but it is certainly possible that some model creators have trained on them anyway. Finally, the benchmark does not remotely cover all aspects of human ability; a model solving this benchmark perfectly could still be much worse than most educated people in many aspects. Nevertheless, we hope that increasing the difficulty standards helps the research community ground the performance of increasingly powerful models more accurately.
|
2307.13692#19
|
2307.13692#21
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#21
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
# Acknowledgments and Disclosure of Funding We thank Jeffrey Deng for developing and documenting the API, and building the project website. We would also like to thank Raunak Chowdhuri for helpful comments, and Zhangir Azerbayev for useful discussions early on in the project. TS is supported by NSF grant 1745583. # References Ibrahim M Alabdulmohsin, Behnam Neyshabur, and Xiaohua Zhai. Revisiting neural scaling laws in language and vision. Advances in Neural Information Processing Systems, 35:22300–22312, 2022.
|
2307.13692#20
|
2307.13692#22
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#22
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Daman Arora, Himanshu Gaurav Singh, and Mausam. Have LLMs advanced enough? A challenging problem solving benchmark for Large Language Models, 2023. Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain, Deep Ganguli, Dustin Li, Eli Tran-Johnson, Ethan Perez, Jamie Kerr, Jared Mueller, Jeffrey Ladish, Joshua Landau, Kamal Ndousse, Kamile Lukosuite, Liane Lovitt, Michael Sellitto, Nelson Elhage, Nicholas Schiefer, Noemi Mercado, Nova DasSarma, Robert Lasenby, Robin Larson, Sam Ringer, Scott Johnston, Shauna Kravec, Sheer El Showk, Stanislav Fort, Tamera Lanham, Timothy Telleen-Lawton, Tom Conerly, Tom Henighan, Tristan Hume, Samuel R. Bowman, Zac Hatfield-Dodds, Ben Mann, Dario Amodei, Nicholas Joseph, Sam McCandlish, Tom Brown, and Jared Kaplan.
|
2307.13692#21
|
2307.13692#23
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#23
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Constitutional AI: Harmlessness from AI feedback, 2022. Barbri. Barbri Practice Questions: Multistate Testing Practice Questions. Thomson/Bar/Bri, 2007. ISBN 9780314174017. Michael Bommarito II and Daniel Martin Katz. GPT takes the bar exam. arXiv preprint arXiv:2212.14402, 2022. Samuel R. Bowman. The dangers of underclaiming: Reasons for caution when reporting how NLP systems fail, 2021.
|
2307.13692#22
|
2307.13692#24
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#24
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Volodymyr Brayman and A. G. Kukush. Undergraduate Mathematics Competitions (1995-2016): Taras Shevchenko National University of Kyiv. Springer, 2018. Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei.
|
2307.13692#23
|
2307.13692#25
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#25
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Language models are few-shot learners. CoRR, abs/2005.14165, 2020. URL https://arxiv.org/abs/2005.14165. Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712, 2023. Ethan Caballero, Kshitij Gupta, Irina Rish, and David Krueger. Broken neural scaling laws, 2023.
|
2307.13692#24
|
2307.13692#26
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#26
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Candice McCloskey Campbell, Shaun Murphree, Jennifer M. Warner, Amy B. Wachholz, Kathy A. Zahler, and George J. Hademenos. McGraw-Hill Education 3 MCAT Practice Tests, Third Edition. McGraw-Hill Education, Jan 2017. ISBN 1259859622. Bryan Caplan. GPT retakes my midterm and gets an A, 2023. URL https://betonit.substack.com/p/gpt-retakes-my-midterm-and-gets-an.
|
2307.13692#25
|
2307.13692#27
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#27
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Cheng-Han Chiang and Hung-yi Lee. Can Large Language Models be an alternative to human evaluations? arXiv e-prints, art. arXiv:2305.01937, may 2023. doi: 10.48550/arXiv.2305.01937. François Chollet. On the measure of intelligence. arXiv preprint arXiv:1911.01547, 2019. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prab- hakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Diaz, Orhan Firat, Michele Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with Pathways, 2022. URL https://arxiv.org/abs/2204.02311. Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems.
|
2307.13692#26
|
2307.13692#28
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#28
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
CoRR, abs/2110.14168, 2021. URL https://arxiv.org/abs/2110.14168. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota, jun 2019. Association for Computational Linguistics. doi: 10.18653/v1/N19-1423. URL https://aclanthology.org/N19-1423. Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu.
|
2307.13692#27
|
2307.13692#29
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#29
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
GPTScore: Evaluate as you desire. arXiv e-prints, art. arXiv:2302.04166, feb 2023. doi: 10.48550/arXiv.2302.04166. Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, and Tushar Khot. Chain-of-thought hub: A continuous effort to measure large language models' reasoning performance, 2023. Răzvan Gelca and Titu Andreescu. Putnam and beyond. Springer, 2017. Gaël Gendron, Qiming Bao, Michael Witbrock, and Gillian Dobbie. Large language models are not abstract reasoners, 2023. Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies, 2021. Zoubin Ghahramani. Introducing PaLM 2, 2023. URL https://blog.google/technology/ai/google-palm-2-ai-large-language-model. Department of Mathematics Harvard University. Qualifying examination for fall 2021, Aug 2021. URL https://www.math.harvard.edu/media/quals-F21_with_solutions.pdf.
|
2307.13692#28
|
2307.13692#30
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#30
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding, 2020. URL https://arxiv.org/abs/2009.03300. Dan Hendrycks, Collin Burns, Saurav Kadavath, Akul Arora, Steven Basart, Eric Tang, Dawn Song, and Jacob Steinhardt. Measuring mathematical problem solving with the MATH dataset. CoRR, abs/2103.03874, 2021. URL https://arxiv.org/abs/2103.03874.
|
2307.13692#29
|
2307.13692#31
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#31
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. Training compute-optimal large language models, 2022. URL https://arxiv.org/abs/2203.15556.
|
2307.13692#30
|
2307.13692#32
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#32
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models, 2020. Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. GPT-4 passes the bar exam. Available at SSRN 4389233, 2023. Tom Kocmi and Christian Federmann. Large language models are state-of-the-art evaluators of translation quality. arXiv e-prints, art. arXiv:2302.14520, feb 2023. doi: 10.48550/arXiv.2302.14520. Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners, 2023. Rik Koncel-Kedziorski, Subhro Roy, Aida Amini, Nate Kushman, and Hannaneh Hajishirzi.
|
2307.13692#31
|
2307.13692#33
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#33
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
MAWPS: A math word problem repository. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1152–1157, San Diego, California, jun 2016. Association for Computational Linguistics. doi: 10.18653/v1/N16-1136. URL https://aclanthology.org/N16-1136. Tiffany H Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, et al.
|
2307.13692#32
|
2307.13692#34
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#34
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2 (2):e0000198, 2023. Md Tahmid Rahman Laskar, M Saiful Bari, Mizanur Rahman, Md Amran Hossen Bhuiyan, Shafiq Joty, and Jimmy Xiangji Huang. A systematic study and comprehensive evaluation of ChatGPT on benchmark datasets, 2023. Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. Solving quantitative reasoning problems with language models, 2022. URL https://arxiv.org/abs/2206.14858. Yifei Li, Zeqi Lin, Shizhuo Zhang, Qiang Fu, Bei Chen, Jian-Guang Lou, and Weizhu Chen. Making large language models better reasoners with step-aware verifier, 2023. Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, Benjamin Newman, Binhang Yuan, Bobby Yan, Ce Zhang, Christian Cosgrove, Christopher D. Manning, Christopher Ré, Diana Acosta-Navas, Drew A.
|
2307.13692#33
|
2307.13692#35
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#35
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Hudson, Eric Zelikman, Esin Durmus, Faisal Ladhak, Frieda Rong, Hongyu Ren, Huaxiu Yao, Jue Wang, Keshav Santhanam, Laurel Orr, Lucia Zheng, Mert Yuksekgonul, Mirac Suzgun, Nathan Kim, Neel Guha, Niladri Chatterji, Omar Khattab, Peter Henderson, Qian Huang, Ryan Chi, Sang Michael Xie, Shibani Santurkar, Surya Ganguli, Tatsunori Hashimoto, Thomas Icard, Tianyi Zhang, Vishrav Chaudhary, William Wang, Xuechen Li, Yifan Mai, Yuhui Zhang, and Yuta Koreeda. Holistic evaluation of language models, 2022. Hunter Lightman, Vineet Kosaraju, Yura Burda, Harri Edwards, Bowen Baker, Teddy Lee, Jan Leike, John Schulman, Ilya Sutskever, and Karl Cobbe.
|
2307.13692#34
|
2307.13692#36
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#36
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Let's verify step by step, 2023. Swee Cheng Lim, Choy Heng Lai, and Leong Chuan Kwek. Problems and solutions on optics. World Scientific, 2019. Yung-kuo Lim. Problems and solutions on thermodynamics and statistical mechanics. World Scientific, 1996. Yung-kuo Lim. Problems and solutions in quantum mechanics: Major American universities Ph.D. qualifying questions and solutions. World Scientific, 1998. Yung-kuo Lim.
|
2307.13692#35
|
2307.13692#37
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#37
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Problems and solutions on electromagnetism. World Scientific Pub. Co, 2007. Yung-kuo Lim and Yuan-qi Qiang. Problems and solutions on mechanics. World Scientific, 2001. Wang Ling, Dani Yogatama, Chris Dyer, and Phil Blunsom. Program induction by rationale generation: Learning to solve and explain algebraic word problems. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 158–
|
2307.13692#36
|
2307.13692#38
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#38
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
167, Vancouver, Canada, jul 2017. Association for Computational Linguistics. doi: 10.18653/v1/P17-1015. URL https://aclanthology.org/P17-1015. Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. G-eval: NLG evaluation using GPT-4 with better human alignment. arXiv e-prints, art. arXiv:2303.16634, mar 2023. doi: 10.48550/arXiv.2303.16634.
|
2307.13692#37
|
2307.13692#39
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#39
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Aaron Meurer, Christopher P. Smith, Mateusz Paprocki, Ondřej Čertík, Sergey B. Kirpichev, Matthew Rocklin, AMiT Kumar, Sergiu Ivanov, Jason K. Moore, Sartaj Singh, Thilina Rathnayake, Sean Vig, Brian E. Granger, Richard P. Muller, Francesco Bonazzi, Harsh Gupta, Shivam Vats, Fredrik Johansson, Fabian Pedregosa, Matthew J. Curry, Andy R. Terrel, Štěpán Rouč
|
2307.13692#38
|
2307.13692#40
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#40
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
ka, Ashutosh Saboo, Isuru Fernando, Sumith Kulal, Robert Cimrman, and Anthony Scopatz. SymPy: Symbolic computing in Python. PeerJ Computer Science, 3:e103, jan 2017. ISSN 2376-5992. doi: 10.7717/peerj-cs.103. URL https://doi.org/10.7717/peerj-cs.103. Shen-yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing English math word problem solvers. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 975–984, Online, jul 2020. Association for Computational Linguistics. doi: 10.18653/v1/2020.acl-main.92. URL https://aclanthology.org/2020.acl-main.92.
|
2307.13692#39
|
2307.13692#41
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#41
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Shakked Noy and Whitney Zhang. Experimental evidence on the productivity effects of generative artificial intelligence. Available at SSRN 4375283, 2023. Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. Show your work: Scratchpads for intermediate computation with language models, 2021. OpenAI. GPT-4 technical report, 2023. Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are NLP models really able to solve simple math word problems?, 2021.
|
2307.13692#40
|
2307.13692#42
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#42
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Ethan Perez, Sam Ringer, Kamilė Lukošiūtė, Karina Nguyen, Edwin Chen, Scott Heiner, Craig Pettit, Catherine Olsson, Sandipan Kundu, Saurav Kadavath, et al. Discovering language model behaviors with model-written evaluations. arXiv preprint arXiv:2212.09251, 2022. Mahima Pushkarna, Andrew Zaldivar, and Oddur Kjartansson. Data Cards: Purposeful and Transparent Dataset Documentation for Responsible AI. arXiv e-prints, art. arXiv:2204.01075, April 2022. doi: 10.48550/arXiv.2204.01075. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
|
2307.13692#41
|
2307.13692#43
|
2307.13692
|
[
"2212.14402"
] |
2307.13692#43
|
ARB: Advanced Reasoning Benchmark for Large Language Models
|
Subhro Roy and Dan Roth. Solving general arithmetic word problems, 2016. Paulo Shakarian, Abhinav Koyyalamudi, Noel Ngu, and Lakshmivihari Mareedu. An independent evaluation of ChatGPT on mathematical word problems (MWP). arXiv preprint arXiv:2302.13814, 2023. Natalie Shapira, Mosh Levy, Seyed Hossein Alavi, Xuhui Zhou, Yejin Choi, Yoav Goldberg, Maarten Sap, and Vered Shwartz. Clever Hans or Neural Theory of Mind? Stress testing social reasoning in large language models. arXiv preprint arXiv:2305.14763, 2023.
|
2307.13692#42
|
2307.13692#44
|
2307.13692
|
[
"2212.14402"
] |