doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2306.06331
| 26 |
Figure 3: ChatGPT's performance by question order.
Figure 3 illustrates the average number of correct responses given by ChatGPT for each question across all years. The data show that the probability of ChatGPT providing an accurate response decreases as the question's level of complexity rises. ChatGPT's correct-answer rate is greater than 50% for questions 1 through 35, which are K- and C-level questions. For questions 35 to 50, however, the rate drops below 50%, declining in step with the increasing difficulty of the questions. The graph demonstrates that as question difficulty grows, ChatGPT's accuracy declines. This pattern is to be expected, since questions at higher knowledge levels tend to be more complicated and require in-depth comprehension and problem-solving abilities. The findings imply that the difficulty and complexity of the questions have a significant impact on ChatGPT's capacity to provide accurate answers. This has important implications for the design of AI systems for educational applications, since it emphasizes the need for more sophisticated models capable of handling difficult and challenging tasks. It also suggests that further investigation is required to identify the specific factors that influence ChatGPT's performance on various question types; this understanding can guide the creation of more effective AI-based educational tools and interventions.
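To make the per-question aggregation behind Figure 3 concrete, here is a minimal sketch (ours, using randomly generated placeholder data rather than the paper's results) that computes the correct-answer rate per question across years and reports where it drops below 50%:

```python
import numpy as np

# Hypothetical data: rows = exam years, columns = questions 1..50,
# entries = 1 if ChatGPT answered correctly, 0 otherwise.
rng = np.random.default_rng(0)
correct = (rng.random((5, 50)) < np.linspace(0.9, 0.2, 50)).astype(int)

# Average correct-response rate per question across all years (as in Figure 3).
per_question_rate = correct.mean(axis=0)

# 1-indexed question numbers whose accuracy falls below the 50% threshold.
below_half = np.flatnonzero(per_question_rate < 0.5) + 1
print(per_question_rate.round(2))
print("Questions below 50% accuracy:", below_half)
```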
|
2306.06331#26
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 26 |
Physical Demonstrations We demonstrate AutoTAMP on physical differential-drive robots via the remotely-accessible
Robotarium platform [25] for the Overcooked, Rover, Wall, and Chipâs Challenge scenarios. We track the planned tra- jectories using a low-level controller that also includes a control barrier function to prevent collisions between robots. This controller and the underlying nonlinear dynamics in- duce a tracking error; we account for this by padding obstacles at planning time. Obstacles are displayed in the robot workspace using an overhead projector. These physical demos provide evidence that our method can be applied to real-world navigation task and motion planning. They are included as part of supplemental videos.
# VI. RELATED WORK
|
2306.06531#26
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 26 |
Table 3: Commonsense and symbolic reasoning accuracy. For each task, we report the median scores among 5 runs.
Figure 3: Illustration of error analysis of Chain-of-Thought prompting across twelve tasks. Each error type is represented by a color, and the size of each colored share indicates that error type's proportion.
# 4.3 Analysis of Whether Correcting Sub-logics Solves the Majority of Incorrect Rationales
We conduct experiments on twelve datasets to check whether correcting sub-logics solves the majority of incorrect rationales. For each task, represented by a pie chart, we conduct an error analysis of CoT prompting and divide the error types of rationales into four categories: errors that can be corrected by the "modifying" operation, by the "adding" operation, by the "deleting" operation, and the remaining errors that cannot be manually corrected.
|
2306.07932#26
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 27 |
The analysis of the model's performance in relation to the order of the questions can be beneficial in a number of ways, beyond determining ChatGPT's accuracy in responding to the questions. First, it can help teachers understand how the order of questions affects ChatGPT's capacity to solve them and optimize the question sequence to produce a more useful evaluation. This is crucial because as an exam goes on, students may become cognitively fatigued, which may affect how well they perform on subsequent questions. By studying ChatGPT's performance with regard to the arrangement of questions, teachers can simulate how students might perform under various circumstances and design exams that more accurately assess their knowledge and abilities. Understanding how the question sequence affects ChatGPT's performance can also help identify possible weak points in the model, which can guide future model improvements.
# 4.3 ChatGPT's performance in levels and topics
|
2306.06331#27
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 27 |
# VI. RELATED WORK
Task and Motion Planning Planning for robotics involves both high-level, discrete planning of tasks [5] and low-level continuous planning of motions [26]; solving these simultaneously is referred to as task and motion planning [1]. Modern approaches either attempt to satisfy the motion constraints prior to action sequencing [27], [28], [29], find action sequences then satisfy the motion constraints [30], [31], [32], [19], or interleave these steps [33], [34], [35]. For tasks specified in temporal logic, existing methods either use multi-layer planning [36], like the aforementioned approaches, or direct optimization via a mixed-integer linear program [37], [23] or a non-linear program [38]. Our work focuses on translating natural language to STL, relying on [23] as a TAMP solver, but can be integrated with other STL-based planners.
|
2306.06531#27
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 27 |
Figure 4: Results for different thresholds of DE. It shows the results of MCS with 5%, 10%, 20%, 30%, 40% and 50% DE for AddSub (left), SingleEq (middle) and SingleOp (right). The results show that DE-based filtering is an efficient way to rank CoT outputs by their likelihood of being incorrect: samples with incorrect outputs are ranked higher than those with correct ones.
Figure 5: ROC curves for DE filtering out incorrect CoT outputs, for AddSub (left), SingleEq (middle) and SingleOp (right). The results indicate that DE is a reliable metric for determining which samples are most likely to be incorrectly predicted and should therefore involve humans.
|
2306.07932#27
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 28 |
# 4.3 ChatGPT's performance in levels and topics
Table 4 shows the percentage of accurate ChatGPT responses for each year, broken down by difficulty level. The average percentage of correct answers for K-level questions given by ChatGPT ranged from 90% in 2022 to 75% in 2023. The highest percentage of accurate answers for C-level questions was 72.22% in 2022, and the lowest was 40% in 2023. The highest and lowest percentages of correct responses for A-level questions were 55.56% and 0%, respectively. For the years 2021, 2022, and 2023, ChatGPT did not offer any accurate responses to H-type questions; the percentages for the remaining years were 16.67% and 22.22%. These results show how ChatGPT has performed over time at various levels of difficulty.
Table 4: ChatGPT's performance in question levels

| Year | K | C | A | H |
|---|---|---|---|---|
| 2023 | 75.00 | 40.00 | 25.00 | 0.00 |
| 2022 | 90.00 | 72.22 | 0.00 | 0.00 |
| 2021 | 81.82 | 62.50 | 28.57 | 0.00 |
| 2020 | 89.47 | 62.50 | 55.56 | 16.67 |
| 2019 | 85.71 | 58.82 | 20.00 | 22.22 |
[Figure 4: bar chart of ChatGPT's performance (%) at question levels K, C, A, and H.]
|
2306.06331#28
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 28 |
LLMs for TAMP Recent claims about the impressive reasoning capabilities of LLMs [6], [39] have led to interest in such models for task and motion planning. One approach is to directly use LLMs as planners [8], [9], [12], [11], [7], [10], [13]. Initial work showed that zero-shot generation of an action sequence from a high-level task description had relatively poor executability, but few-shot in-context learning, constraining output to admissible actions, and iterative action generation significantly improved performance [8]. Subsequent efforts grounded the primitive actions to motion control policies, using affordance functions to guide LLM-based task planning [9] and TAMP [12], also adding feedback [11]. Other work focused on how prompting can inform task execution [7], [13]. Despite these successes, however, there is evidence that LLMs perform poorly on more realistic tasks [15], [40], motivating different approaches. While we are interested in LLMs for TAMP, our work does not directly use LLMs as planners.
|
2306.06531#28
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 28 |
The percentage of each type across datasets is shown in Fig. 3. More details are shown in Appendix B.2.
The first three categories constitute the majority of incorrect rationales and can be solved by correcting independent sub-logics instead of the whole rationale. More specifically, CoT often makes mistakes in polynomial calculations with decimal points, which account for a large part of manual correction and can be corrected by the "modifying" operation. The "adding" operation applies when CoT fails to convert units, for example from grams to kilograms. CoT also often outputs redundant logic that leads to incorrect answers, which can be fixed by the "deleting" operation. Beyond these, errors that cannot be manually corrected include misinterpretation of the question, use of an incorrect formula, an entirely incorrect composition of sub-logics, and so on.
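To make the three correctable categories concrete, here is a schematic sketch (ours, with invented sub-logics) of a rationale represented as an ordered list of sub-logic strings and repaired with the "modifying", "adding", and "deleting" operations:

```python
# A rationale is treated as an ordered list of independent sub-logics.
rationale = [
    "The books cost 2.5 * 4 = 12.0 dollars.",        # decimal arithmetic slip -> "modifying"
    "The parcel weighs 500 grams.",                   # missing unit conversion -> "adding"
    "Add the shipping fee twice: 10 + 3 + 3 = 16.",   # redundant logic -> "deleting"
]

def modify(steps, i, new_step):
    """'Modifying': replace an incorrect sub-logic, e.g. a decimal calculation."""
    return steps[:i] + [new_step] + steps[i + 1:]

def add(steps, i, new_step):
    """'Adding': insert a missing sub-logic, e.g. a unit conversion."""
    return steps[:i] + [new_step] + steps[i:]

def delete(steps, i):
    """'Deleting': drop a redundant sub-logic that leads to a wrong answer."""
    return steps[:i] + steps[i + 1:]

fixed = modify(rationale, 0, "The books cost 2.5 * 4 = 10.0 dollars.")
fixed = add(fixed, 2, "500 grams = 0.5 kilograms.")
fixed = delete(fixed, 3)   # remove the redundant shipping-fee step
print("\n".join(fixed))
```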
|
2306.07932#28
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 29 |
Figure 4: ChatGPT's performance in question levels for 2019-2023.
Figure 4 depicts ChatGPT's accuracy from 2019 to 2023, grouped by the questions' degree of complexity. For questions classified as type K, ChatGPT attained an accuracy rate ranging from 75% to 90%, with a small standard deviation indicating high consistency. This demonstrates ChatGPT's strong skill in answering questions that are not too challenging. For questions of type C, the accuracy rate falls to 40-72%, showing that ChatGPT performs less effectively on questions of intermediate difficulty. Type A questions show the greatest variation in ChatGPT's accuracy rate, with correct answers ranging from 0% to 57% and the highest standard deviation; ChatGPT is therefore least consistent when attempting challenging type-A questions. The accuracy of ChatGPT's answers to the most difficult type H questions ranges from 0% to 22%, which is quite low. Based on these findings, ChatGPT performs better on easier questions than on more complex ones.
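The consistency claims above can be checked directly against Table 4; below is a small sketch (ours) that computes the mean and standard deviation of the yearly accuracy for each level:

```python
import statistics

# Accuracy (%) per question level from Table 4, years 2023 down to 2019.
table4 = {
    "K": [75.00, 90.00, 81.82, 89.47, 85.71],
    "C": [40.00, 72.22, 62.50, 62.50, 58.82],
    "A": [25.00, 0.00, 28.57, 55.56, 20.00],
    "H": [0.00, 0.00, 0.00, 16.67, 22.22],
}

for level, values in table4.items():
    mean = statistics.mean(values)
    spread = statistics.pstdev(values)  # population std over the five years
    print(f"{level}: mean = {mean:5.1f}%, std = {spread:4.1f}")
# K shows the smallest spread and A the largest, matching the discussion above.
```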
|
2306.06331#29
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 29 |
Translating Language to Task Representations A natural alternative is to rely on dedicated planners by mapping from natural language to a planning representation. There is a rich history of parsing natural language into formal semantic representations [41], [42], [43], of which we only provide a relatively small sampling. The robotics community adopted parsing and other techniques to map language to such representations as lambda calculus [44], [45], motion planning constraints [46], linear temporal logic [47], [48], [49], [50], and signal temporal logic [51], [52], among others [53]. We refer readers to [54] for a more thorough review.
To address challenges of data availability, task generalization, linguistic complexity, common sense reasoning, and more, recent work has applied LLMs to this translation problem. Modular approaches have used LLMs to extract referring expressions with corresponding logic propositions to then construct a full temporal logic specification [55], [21]. Relying on LLMs for direct translation, other work has mapped from language to PDDL goals [17] or full PDDL problems [56], [16]. Our work similarly translates to a task specification, but we can represent complex constraints (e.g. temporal), and we introduce a novel mechanism for automatic detection and correction of semantic errors. An interesting alternative maps language to code [57], which is highly expressive but does not easily optimize or provide behavior guarantees for long-horizon tasks.
|
2306.06531#29
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 29 |
Additionally, we find that the advantage of Self-consistency often comes from fixing errors that cannot be manually corrected. Sampling a large set of rationales and taking a majority vote helps fix misinterpretations of the question but offers little help in fixing calculation errors. In contrast, MCS is beneficial for the other three categories of errors: "modifying", "adding" and "deleting". This difference between Self-consistency and MCS explains why MCS + Self-consistency achieves the strong performance shown in Tab. 2: MCS and Self-consistency play different roles and are mutually complementary.
# 4.4 Additional Study
Validation of Diversity Entropy To validate the effectiveness of Diversity Entropy in determining whether manual correction is necessary for each sample, we draw an ROC curve in Fig. 5 to demonstrate its ability to rank samples by the likelihood of incorrect outputs. The selection of the threshold involves a trade-off between performance and human labor. Fig. 4 shows that performance stabilizes after the threshold reaches the top 20% to top 40% for most datasets; therefore, we set the threshold to the top 40% across all our experiments. As manual correction is labor- and time-consuming, Diversity Entropy can save time and labor by allowing humans to focus on checking only a small percentage of samples.
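A minimal sketch of this filtering step (ours; the paper's exact Diversity Entropy definition may differ, so Shannon entropy over the sampled final answers is used as a stand-in):

```python
import math
from collections import Counter

def diversity_entropy(answers):
    """Shannon entropy of the final-answer distribution over sampled rationales.
    Stand-in for the paper's Diversity Entropy: higher entropy means more
    disagreement between rationales, i.e. more likely to need manual correction."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# Final answers extracted from 5 sampled CoT rationales per question (made up).
samples = {
    "q1": ["12", "12", "12", "12", "12"],  # unanimous -> low entropy
    "q2": ["7", "9", "7", "11", "9"],      # disagreement -> high entropy
    "q3": ["4", "4", "4", "5", "4"],
}

ranked = sorted(samples, key=lambda q: diversity_entropy(samples[q]), reverse=True)
k = max(1, int(0.4 * len(ranked)))  # forward only the top-40% most uncertain samples
print("Flag for manual correction:", ranked[:k])
```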
|
2306.07932#29
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 30 |
Table 5 depicts the percentage of correct responses offered by ChatGPT for different topics from 2019 to 2023. ChatGPT provided 100% accurate responses for all years for topic M11B, and also achieved 100% accuracy on topics M11A, M12D, M12F, and M11C in a number of years. In 2022, ChatGPT's accuracy rate for the M11C topic was 0%. With the exception of the M12A topic on graphs and diagrams, ChatGPT's accuracy rate for the other topics was rather high.
Table 5: ChatGPT's performance in question topics
|
2306.06331#30
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 30 |
Re-prompting of LLMs The quality of LLM output is greatly improved with useful context, such as few-shot in-context learning for novel tasks [6]. LLMs for TAMP are typically also provided task-relevant information, such as environment state or admissible actions [10]. Re-prompting with additional context based on LLM output has been shown to be extremely beneficial, such as with iterative action generation [8], environmental feedback [11], inadmissible actions [8], [9], [12], unmet action preconditions [58], [56], code execution errors [59], and syntactic errors in structured output [20]. Our work uses the same syntactic correction re-prompting technique as [20], but we also introduce automatic detection and correction of semantic errors via re-prompting.
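A minimal sketch of such a correction re-prompting loop (our paraphrase of the idea; `llm`, `check_syntax`, and `check_semantics` are placeholder callables, not real APIs):

```python
def translate_with_reprompting(task_description, llm, check_syntax, check_semantics,
                               max_rounds=3):
    """Translate a natural-language task into a formal spec (e.g. STL), re-prompting
    the LLM with any detected error appended as context. All three callables are
    placeholders standing in for the actual model, parser, and semantic checker."""
    prompt = f"Translate the following task into STL:\n{task_description}"
    for _ in range(max_rounds):
        spec = llm(prompt)

        # Syntactic check: does the output parse as a valid specification?
        error = check_syntax(spec)
        if error is None:
            # Semantic check: does a plan for this spec actually match the intent?
            error = check_semantics(spec, task_description)
            if error is None:
                return spec

        # Autoregressive re-prompting: feed the previous answer and the error back.
        prompt += f"\nPrevious answer:\n{spec}\nDetected error: {error}\nPlease correct it."
    return None  # unresolved after max_rounds
```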
# VII. CONCLUSION
|
2306.06531#30
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 30 |
| Calculation Strategy | ASDiv | AQuA | SVAMP | GSM8K |
|---|---|---|---|---|
| Unnormalized Weighted Average | 73.71 | 44.09 | 74.50 | 61.41 |
| Normalized Weighted Average | 73.71 | 40.94 | 74.60 | 61.56 |
| Unnormalized Weighted Sum | 73.80 | 42.52 | 74.50 | 60.20 |
| Normalized Weighted Sum | 73.37 | 44.88 | 71.30 | 59.21 |
| Unnormalized Unweighted Sum (Majority Vote) | 75.52 | 44.09 | 74.60 | 61.56 |
Table 4: Accuracy comparison of different strategies of computing answer probability. The threshold of Diversity Metrics is set to be top 40%.
Analysis of Aggregation Strategies The majority-vote method of calculating the answer probability over all sampled rationales can be regarded as taking an unnormalized unweighted sum. As described in Wang et al. [2022], other methods of computing the probability of an answer include the unnormalized weighted average, normalized weighted average, unnormalized weighted sum, and normalized weighted sum. More details about the above calculations are provided in Appendix ??. Tab. 4 shows that the unnormalized unweighted sum generally outperforms the others. We use this setting in all experiments, following Wang et al. [2022].
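A small sketch (ours) of the vote/weight distinction between these strategies, over sampled rationales that each carry a final answer and a model-assigned probability (the normalized variants, which Wang et al. [2022] compute differently, are omitted here):

```python
from collections import defaultdict

# Sampled rationales: (final answer, model-assigned probability of that rationale).
samples = [("18", 0.30), ("18", 0.25), ("20", 0.40), ("18", 0.05), ("20", 0.10)]

def aggregate(samples, weighted=True, average=False):
    """Return the answer maximizing the chosen score:
    weighted=False              -> unweighted sum (majority vote)
    weighted=True               -> weighted sum of rationale probabilities
    weighted=True, average=True -> weighted average (sum divided by vote count)"""
    scores, counts = defaultdict(float), defaultdict(int)
    for answer, p in samples:
        scores[answer] += p if weighted else 1.0
        counts[answer] += 1
    if average:
        for answer in scores:
            scores[answer] /= counts[answer]
    return max(scores, key=scores.get)

print(aggregate(samples, weighted=False))               # majority vote    -> "18"
print(aggregate(samples, weighted=True))                # weighted sum     -> "18"
print(aggregate(samples, weighted=True, average=True))  # weighted average -> "20"
```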
|
2306.07932#30
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 31 |
Table 5: ChatGPT's performance in question topics
M11C M11B M11A M12A M12B M12C M12D M12E M12F M12G
2023 50 100.00 50.00 30.00 75.00 57.14 83.33 33.33 50.00
2022 0 100.00 50.00 50.00 75.00 71.43 66.67 66.67 66.67
2021 50 100.00 100.00 20.00 75.00 71.43 66.67 66.67 66.67
2020 100 100.00 100.00 46.15 62.50 42.86 100.00 66.67 100.00
2019 100.00 50.00 28.57 71.43 80.00 40.00 80.00 33.33 44.44 62.50 62.50 75.00 50.00
Figure 5: ChatGPT's performance in question topics for 2019-2023.
|
2306.06331#31
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 31 |
# VII. CONCLUSION
This paper presented AutoTAMP, a framework for using pre-trained LLMs as both (1) translators from language task descriptions to formal task specifications (e.g. STL) via few-shot in-context learning and (2) checkers of syntactic and semantic errors via corrective re-prompting, in which we contributed a novel autoregressive re-prompting technique for semantic errors. Our experimental results show using LLMs to translate to task specifications that can be solved via a formal planner outperforms approaches that use LLMs directly as planners when handling tasks with complex geometric and temporal constraints.
We note a few limitations of this work. First, though our results rely on using the best prompt out of several candidates, alternatives may elicit better performance. However, we expect the trends between methods to persist even with better prompts, supporting the conclusion that LLMs are not well suited for directly solving complex TAMP. Second, the cost of planning time is high, especially when there are multiple iterations of re-prompting. Further work is needed to address the runtime of formal planners and LLM inference. Third, the STL planner used in this work is not immediately applicable to manipulation tasks due to the optimization methods used in the planner; however, our approach does not depend on this specific planner, and we believe it can be integrated with STL planners more suitable for such TAMP domains.
# REFERENCES
|
2306.06531#31
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 31 |
Analysis of the Number of Sampled Rationales We test the accuracy with respect to varying the number of rationales (i.e., 5, 10, 15, 20, 25, 30, 35, 40) in Fig. 6. The results are arithmetic reasoning accuracy on SingleEq. For a fair comparison, both MCS and Self-consistency use the same prompts as in Wei et al. [2022]. Both MCS and Self-consistency use the same 5 rationales sampled from the decoder. In our experiments, the threshold of Diversity Metrics is set to be top 40%. The results show that MCS generally outperforms self-consistency and benefits from the increasing number of sampled rationales.
# Figure 6: Experiments of different numbers of rationales.
# 4.5 Balancing Cost and Utility
|
2306.07932#31
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 32 |
Figure 5: ChatGPT's performance in question topics for 2019-2023.
Recently, much attention has been paid to how well AI models perform, particularly when answering questions. Figure 5 provides an informative examination of ChatGPT's accuracy in responding to various question types over the period 2019-2023. The findings show that ChatGPT's accuracy varies depending on the type of question being answered. In particular, ChatGPT answered M11C questions with an accuracy rate of 0-100%, M11B questions with 100%, M11A questions with 50-100%, M12A questions with 20-50%, M12B questions with 62-75%, M12C questions with 42-80%, M12D questions with 40-100%, M12E questions with 33-80%, M12F questions with 33-100%, and M12G questions with 44-75%.
The level of difficulty of the questions, the number and quality of training data, and the model's internal architecture are just a few of the variables that can affect how well ChatGPT performs while answering these questions. Therefore, comprehending the variations in performance across various question types can offer insights into the model's advantages and disadvantages as well as guide future developments to enhance its performance.
|
2306.06331#32
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 32 |
# REFERENCES
[1] C. R. Garrett, R. Chitnis, R. Holladay, B. Kim, T. Silver, L. P. Kaelbling, and T. Lozano-Pérez, "Integrated task and motion planning," Annual Review of Control, Robotics, and Autonomous Systems, vol. 4, pp. 265–293, 2021.
[2] M. Fox and D. Long, "PDDL2.1: An extension to PDDL for expressing temporal planning domains," Journal of Artificial Intelligence Research, vol. 20, pp. 61–124, 2003.
[3] E. A. Emerson, "Temporal and modal logic," in Formal Models and Semantics. Elsevier, 1990, pp. 995–1072.
[4] K. He, M. Lahijanian, L. E. Kavraki, and M. Y. Vardi, "Towards manipulation planning with temporal logic specifications," in 2015 IEEE International Conference on Robotics and Automation (ICRA), 2015, pp. 346–352.
|
2306.06531#32
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 32 |
# Figure 6: Experiments of different numbers of rationales.
# 4.5 Balancing Cost and Utility
| Plans | Time | Money | Acc. | Utility (User Satis.) |
|---|---|---|---|---|
| Human | 60s | $0.125 | 93.20 | 86.40 |
| CoT Prompting | 0.8s | $0.080 | 85.04 | 81.60 |
| Self-Consistency ($N_{self} = 10$) | 8s | $0.800 | 92.49 | 85.80 |
| MCS ($N_{MCS} = 5$, $\alpha = 20\%$) | 10.8s | $0.4925 | 91.00 | 84.20 |
| MCS + Self-consistency ($N_{MCS} = 5$, $\alpha = 20\%$) | 10.8s | $0.4925 | 93.50 | 88.80 |
| MCS ($N_{MCS} = 5$, $\alpha = 40\%$) | 16.8s | $0.505 | 92.51 | 85.60 |
| MCS + Self-consistency ($N_{MCS} = 5$, $\alpha = 40\%$) | 16.8s | $0.505 | 94.09 | 90.80 |
Table 5: Analysis of cost and utility for SingleEq. MCS + Self-consistency generally outperforms other methods with higher utility and acceptable cost. N: number of sampled rationales. α: DE threshold. Acc.: Accuracy. User Satis.: User Satisfaction. More details are shown in Appendix G.
|
2306.07932#32
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 33 |
A thorough analysis of ChatGPT's performance on various levels and topics is presented in Table 6. First, consider the difficulty of the questions; ChatGPT was able to accurately respond to 85 of 103 questions at level K. Out of 77 questions at level C, 48 were correctly answered by ChatGPT. Only 12 of the 49 questions in level A could be correctly answered by ChatGPT, while only 3 of the 29 questions in level H could be answered by ChatGPT. Second, ChatGPT's performance varied depending on the type of question. For M11A, M11B, M11C, and M12A, ChatGPT correctly answered 7 out of 10 questions, 5 out of 5 questions, 4 out of 8 questions, and 20 out of 57 questions, respectively. For M12B, M12C, M12D, M12E, M12F, and M12G, respectively, ChatGPT correctly answered 28 out of 39 questions, 21 out of 33 questions, 18 out of 26 questions, 11 out of 16 questions, 9 out of 15 questions, and 24 out of 41 questions.
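For readers who want to re-derive the level-wise accuracy rates from these counts, a minimal sketch using only the numbers quoted in this paragraph:

```python
# Correct / total counts per difficulty level, as quoted in the paragraph above.
levels = {"K": (85, 103), "C": (48, 77), "A": (12, 49), "H": (3, 29)}

for level, (correct, total) in levels.items():
    print(f"Level {level}: {correct}/{total} = {100 * correct / total:.1f}%")
# Prints roughly 82.5%, 62.3%, 24.5%, and 10.3%.
```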
|
2306.06331#33
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 33 |
[5] R. E. Fikes and N. J. Nilsson, "Strips: A new approach to the application of theorem proving to problem solving," Artificial Intelligence, vol. 2, no. 3, pp. 189–208, 1971. [Online]. Available: https://www.sciencedirect.com/science/article/pii/0004370271900105 [6] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell, et al., "Language models are few-shot learners," Advances in Neural Information Processing Systems, vol. 33, pp. 1877–1901, 2020.
[7] J. Wu, R. Antonova, A. Kan, M. Lepert, A. Zeng, S. Song, J. Bohg, S. Rusinkiewicz, and T. Funkhouser, "Tidybot: Personalized robot assistance with large language models," arXiv preprint arXiv:2305.05658, 2023.
|
2306.06531#33
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 33 |
In this section, we conduct experiments on the SingleEq dataset to quantitatively calculate cost and utility for CAMLOP. For the cost, we consider money and time. We set the price of the LLM as $p_{llm}$ and the time cost as $t_{llm}$. Since we use GPT-3, the price $p_{llm}$ for a single math problem (decoding once) is $0.08 on average, and the time cost $t_{llm}$ is 0.8 seconds based on empirical results.6 The price of solving a single math problem with only human labor is $p_{human}$ and the time cost is $t_{human}$. We
6 The pricing of text-davinci-002 is $0.02 per 1000 tokens, which can be found at https://openai.com/pricing. We set $p_{llm}$ to be $0.08 because an input sample for few-shot CoT contains about 4000 tokens on average when decoding only once. Note that we only calculated the time for the main part (i.e., the decoding) and ignored other parts that were fast enough to be ignored compared to the API calls.
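A quick back-of-the-envelope check of that $0.08 figure, under the assumptions stated in the footnote (about 4000 input tokens per problem, decoded once, at $0.02 per 1000 tokens):

```python
price_per_1k_tokens = 0.02   # text-davinci-002 pricing quoted above
tokens_per_problem = 4000    # approximate few-shot CoT prompt size

p_llm = tokens_per_problem / 1000 * price_per_1k_tokens
print(f"p_llm for one decoding pass: ${p_llm:.2f}")  # -> $0.08
```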
|
2306.07932#33
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 34 |
It is crucial to keep in mind that certain topics only contain questions at the knowledge and comprehension levels that are quite simple to respond to, and ChatGPT did well on these because of its aptitude for natural language generation. Therefore, ChatGPT's high scores on these topics do not necessarily reflect its understanding of mathematics or capacity for reasoning. Furthermore, it is challenging to give a precise rating solely based on topics because some topics have
a preponderance of knowledge-level questions. Additionally, due to a lack of information, ChatGPT might not be able to respond to some knowledge-level questions. As an illustration, many questions in the topic of derivatives and applications (M12A) call for the interpretation of graphs or variable tables, which ChatGPT is unable to read from images at this time. As a result, ChatGPT might be unable to respond to some inquiries that require an understanding of this subject. These findings show that ChatGPT has varying degrees of competence across math specialties. In general, ChatGPT performed well for some question types but poorly for others.
Table 6: ChatGPT's performance in knowledge matrix
|
2306.06331#34
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 34 |
[8] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch, "Language models as zero-shot planners: Extracting actionable knowledge for embodied agents," in International Conference on Machine Learning. PMLR, 2022, pp. 9118–9147.
[9] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al., "Do as I can, not as I say: Grounding language in robotic affordances," arXiv preprint arXiv:2204.01691, 2022.
[10] I. Singh, V. Blukis, A. Mousavian, A. Goyal, D. Xu, J. Tremblay, D. Fox, J. Thomason, and A. Garg, "ProgPrompt: Generating situated robot task plans using large language models," in International Conference on Robotics and Automation (ICRA), 2023. [Online]. Available: https://arxiv.org/abs/2209.11302
|
2306.06531#34
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 34 |
set $p_{human}$ to be $0.125 and $t_{human}$ to be 60 seconds based on our empirical results.7 The price of human labor for MCS to correct a single math problem, $p_{MCS}$, is $0.0625 and the time cost $t_{MCS}$ is 30 seconds based on empirical results. Note that the time required to inspect and correct is less than the time needed to fully solve the entire problem, therefore $t_{MCS} < t_{human}$.
For the utility, we consider user satisfaction as the comprehensive score. We ask five users to write down their satisfaction levels and calculate the average.8 We also perform regression analysis on user satisfaction based on LLM and Human and ultimately learn the utility function $u(x_{llm}, x_{human})$ as a product of powers of $x_{llm}$ and $x_{human}$ with fitted exponents of 2.05 and 1.94. We experiment on five candidate plans based on models from Sec. 4.2 and Sec. 4.4 (Fig. 4 and Fig. 6):
1. Human: A plan that requires only human labor, which costs $p_{human}$ and takes $t_{human}$ seconds.
2. CoT-prompting: A naive CoT plan that only requires GPT-3 to decode once, which costs $p_{llm}$ and takes $t_{llm}$ seconds.
|
2306.07932#34
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 35 |
Table 6: ChatGPT's performance in knowledge matrix
| Level | M11C | M11B | M11A | M12A | M12B | M12C | M12D | M12E | M12F | M12G | Total | Accuracy |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| K | 1 | 5 | 5 | 12 | 15 | 12 | 8 | 7 | 7 | 13 | 85 | 83% |
| C | 2 | – | 1 | 6 | 11 | 7 | 8 | 2 | 1 | 10 | 48 | 62% |
| A | 1 | – | 1 | 0 | 2 | 2 | 2 | 1 | 1 | 1 | 11 | 27% |
| H | 0 | – | – | 2 | 0 | 0 | 0 | 1 | – | 0 | 3 | 10% |
| TOPIC | 4 (50%) | 5 (100%) | 7 (70%) | 20 (35%) | 28 (72%) | 21 (64%) | 18 (69%) | 11 (65%) | 9 (64%) | 24 (59%) | 147 | 58.80% |
Figure 6: Distribution of the percentage of correct answers in levels and topics.
|
2306.06331#35
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 35 |
[11] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al., "Inner monologue: Embodied reasoning through planning with language models," arXiv preprint arXiv:2207.05608, 2022.
[12] K. Lin, C. Agia, T. Migimatsu, M. Pavone, and J. Bohg, "Text2Motion: From natural language instructions to feasible plans," arXiv preprint arXiv:2303.12153, 2023.
[13] Y. Ding, X. Zhang, C. Paxton, and S. Zhang, "Task and motion planning with large language models for object rearrangement," arXiv preprint arXiv:2303.06247, 2023.
[14] N. Wake, A. Kanehira, K. Sasabuchi, J. Takamatsu, and K. Ikeuchi, "ChatGPT empowered long-step robot control in various environments: A case application," arXiv preprint arXiv:2304.03893, 2023.
|
2306.06531#35
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 35 |
2. CoT-prompting: A naive CoT plan that only requires GPT-3 to decode once, which costs $p_{llm}$ and takes $t_{llm}$ seconds.
3. Self-consistency: A Self-consistency plan that requires only LLMs to sample from the decoder $N_{self}$ times, which will cost $N_{self} \times p_{llm}$ and $N_{self} \times t_{llm}$ seconds.
4. MCS: MCS samples from the LLM decoder $N_{MCS}$ times and uses top $\alpha$ as the threshold, requiring $(N_{MCS} + 1) \times p_{llm} + \alpha \times p_{MCS}$ and $(N_{MCS} + 1) \times t_{llm} + \alpha \times t_{MCS}$ seconds.
5. MCS + Self-consistency: An MCS + Self-consistency plan that requires sampling from the decoder $N_{MCS}$ times, which costs the same as the MCS plan.
The results are shown in Tab. 5. They show that MCS + Self-consistency generally outperforms other methods with higher utility (i.e., better user satisfaction) as well as an acceptable cost.
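As a concrete illustration, the sketch below recomputes the Money and Time columns of Tab. 5 from the unit costs given earlier ($p_{llm}$ = $0.08, $t_{llm}$ = 0.8 s; $p_{MCS}$ = $0.0625, $t_{MCS}$ = 30 s; $p_{human}$ = $0.125, $t_{human}$ = 60 s). The accuracy and utility columns come from the experiments and the fitted utility function, so they are not reproduced here.

```python
p_llm, t_llm = 0.08, 0.8          # cost and latency of one decoding pass
p_mcs, t_mcs = 0.0625, 30.0       # human cost/time to inspect and correct one problem
p_human, t_human = 0.125, 60.0    # human cost/time to solve one problem from scratch

def plan_cost(name, n=None, alpha=None):
    """Money and time per problem for each candidate plan."""
    if name == "human":
        return p_human, t_human
    if name == "cot":                      # decode once
        return p_llm, t_llm
    if name == "self-consistency":         # decode n times, majority vote
        return n * p_llm, n * t_llm
    if name in ("mcs", "mcs+sc"):          # decode n+1 times, correct top-alpha fraction
        return (n + 1) * p_llm + alpha * p_mcs, (n + 1) * t_llm + alpha * t_mcs
    raise ValueError(name)

for args in [("human",), ("cot",), ("self-consistency", 10),
             ("mcs", 5, 0.2), ("mcs", 5, 0.4)]:
    money, time = plan_cost(*args)
    print(args[0], args[1:], f"${money:.4f}", f"{time:.1f}s")
# e.g. MCS with N_MCS = 5 and alpha = 20%: $0.4925 and 10.8 s, matching Tab. 5.
```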
# 5 Related Work
|
2306.07932#35
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 36 |
These results collectively imply that while ChatGPT might be a valuable tool for addressing math-related queries, its accuracy varies between topics and levels. As a result, significant advancements are required to increase ChatGPT's math question-answering ability, especially in more difficult math subfields. Figure 6 presents a more thorough breakdown of the percentage of right responses by difficulty level and topic so that users of ChatGPT can better understand how well it performs. For instance, in the case of M12G, ChatGPT attained a high accuracy rate of 76% for questions at the K level, followed by 67% for questions at the C level, 25% for questions at the A level, and 0% for questions at the H level. Notably, ChatGPT achieved a flawless accuracy rate of 100% when responding to questions at the K level for M11A, M11B, M11C, M12B, M12D, and M12F. Additionally, ChatGPT was able to correctly respond to H-level questions for M12A (Derivatives and Applications) and M12E (Polyhedron), demonstrating its competency in handling more difficult questions in these topics. These results indicate that the topic and
|
2306.06331#36
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 36 |
[15] K. Valmeekam, A. Olmo, S. Sreedharan, and S. Kambhampati, "Large language models still can't plan (a benchmark for LLMs on planning and reasoning about change)," arXiv preprint arXiv:2206.10498, 2022. [16] B. Liu, Y. Jiang, X. Zhang, Q. Liu, S. Zhang, J. Biswas, and P. Stone, "LLM+P: Empowering large language models with optimal planning proficiency," arXiv preprint arXiv:2304.11477, 2023.
[17] Y. Xie, C. Yu, T. Zhu, J. Bai, Z. Gong, and H. Soh, "Translating natural language to planning goals with large-language models," arXiv preprint arXiv:2302.05128, 2023.
[18] J. Pan, G. Chou, and D. Berenson, "Data-efficient learning of natural language to linear temporal logic translators for robot task specification," arXiv preprint arXiv:2303.08006, 2023.
|
2306.06531#36
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 36 |
# 5 Related Work
The human-in-the-loop system, aiming to achieve what neither humans nor machines can accomplish independently, is defined as a model requiring human interaction [Karwowski, 2006]. When the machine cannot solve the problem, or when cost or security considerations require humans to participate, manual intervention is necessary [Bien et al., 2018, Wu et al., 2022, Zanzotto, 2019, Mosqueira-Rey et al., 2023]. Previous human-in-the-loop systems focus either on adding appropriate tags to data or on providing feedback on cases within a certain confidence interval to the machines, and thus retrain the model afterward with the labeled data or rewarded cases [Wu et al., 2022, Zanzotto, 2019].
|
2306.07932#36
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 37 |
and Applications) and M12E (Polyhedron), demonstrating its competency in handling more difficult questions in these topics. These results indicate that the topic and difficulty level have an impact on ChatGPT's accuracy, and that ChatGPT performs differently depending on how these two factors are coupled. These findings suggest that these particular issues contain linguistic nuances or complexities that the model was unable to
|
2306.06331#37
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 37 |
[19] C. R. Garrett, T. Lozano-Pérez, and L. P. Kaelbling, "PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning," in Proceedings of the International Conference on Automated Planning and Scheduling, vol. 30, 2020, pp. 440–448. [20] M. Skreta, N. Yoshikawa, S. Arellano-Rubach, Z. Ji, L. B. Kristensen, K. Darvish, A. Aspuru-Guzik, F. Shkurti, and A. Garg, "Errors are useful prompts: Instruction guided task programming with verifier-assisted iterative prompting," arXiv preprint arXiv:2303.14100, 2023. [21] Y. Chen, R. Gandhi, Y. Zhang, and C. Fan, "NL2TL: Transforming natural languages to temporal logics using large language models," arXiv preprint arXiv:2305.07766, 2023.
|
2306.06531#37
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 37 |
Recently, LLM-based AI (Artificial Intelligence) systems are developing very quickly, and this trend is expected to expand to the majority of the workforce in the near future [Ouyang et al., 2022, Zhang et al., 2022, Sanh et al., 2021]. However, these systems do not always provide satisfactory answers without human intervention. Additionally, in domains such as criminal fact identification and charge predictions, inference should be reasonable and controlled by humans [Custers, 2022] while LLMs are not qualified. Therefore, it is essential to develop a human-in-the-loop prompting-based system that is designed with the ability to collaborate with humans. Until recently, few researchers have systematically and quantitatively explored human-in-the-loop prompting-based systems. Different from ChatGPT's RLHF (i.e., Reinforcement Learning from Human Feedback),9 we take the first step to use human feedback in an online way without access to parameters. Even though it's a preliminary step, this online method could benefit from further refinement and combination with RLHF in future research.
7 Minimum hourly wage in the United States is $7.5, which can be found at https://www.worker.gov/pay-for-hours-worked/. Solving a problem requires 60 seconds on average. Therefore, the price and time cost required to complete a problem are $0.125 and 60 seconds, respectively.
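The $0.125 figure in this footnote follows directly from the quoted wage and solving time:

```python
hourly_wage = 7.5            # US minimum wage quoted above, dollars per hour
seconds_per_problem = 60.0   # average time to solve one problem by hand

p_human = hourly_wage * seconds_per_problem / 3600
print(f"p_human per problem: ${p_human:.3f}")  # -> $0.125
```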
|
2306.07932#37
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 38 |
adequately capture. This result highlights the need for ongoing study to enhance the model's ability to handle a variety of linguistic complexities. This shortcoming might be brought on by the lack of training data or the intrinsic intricacy of the queries at this level.
By evaluating how well language models, like ChatGPT, can respond to questions of varying degrees of cognitive complexity, one can assess the performance of these models. Knowledge, understanding, application, and strong application are the four categories for the levels of cognitive difficulty in answering questions. The ability to recognize and identify concepts, content, and issues is referred to as the recognition level. Understanding fundamental ideas and being able to articulate them in one's own words are requirements for the comprehension level. The application level necessitates applying concepts in unfamiliar or comparable circumstances. The high application level requires the capacity to apply fundamental ideas to an entirely new challenge.
|
2306.06331#38
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 38 |
[22] O. Maler and D. Nickovic, "Monitoring temporal properties of continuous signals," in Formal Techniques, Modelling and Analysis of Timed and Fault-Tolerant Systems: Joint International Conferences on Formal Modeling and Analysis of Timed Systems, FORMATS 2004, and Formal Techniques in Real-Time and Fault-Tolerant Systems, FTRTFT 2004, Grenoble, France, September 22-24, 2004. Proceedings. Springer, 2004, pp. 152–166.
[23] D. Sun, J. Chen, S. Mitra, and C. Fan, "Multi-agent motion planning from signal temporal logic specifications," IEEE Robotics and Automation Letters, vol. 7, no. 2, pp. 3451–3458, 2022.
[24] C. Finucane, G. Jing, and H. Kress-Gazit, "LTLMoP: Experimenting with language, temporal logic and robot control," in 2010 IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE, 2010, pp. 1988–1993.
|
2306.06531#38
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 38 |
8 See Appendix for more details about user satisfaction. The impact of accuracy on user satisfaction is much larger than that of the time cost; we speculate that most users care more about the accuracy of solving the problem than about the time cost, as SingleEq is a math-solving dataset.
9 https://openai.com/blog/chatgpt.
# 6 Conclusion
We propose the MCS to explore how manual correction of rationales can improve LLM's reasoning ability. Then, we propose CAMLOP to quantitatively and systematically analyze and balance the cost and the corresponding utility. Experiments demonstrate that our MCS significantly outperforms strong baselines including the CoT prompting approach and Self-consistency approach and obtains the optimal balance between cost and utility.
# References
David H Ackley, Geoffrey E Hinton, and Terrence J Sejnowski. A learning algorithm for Boltzmann machines. Cognitive Science, 9(1):147–169, 1985.
Sharat Agarwal, Himanshu Arora, Saket Anand, and Chetan Arora. Contextual diversity for active learning. In European Conference on Computer Vision, pages 137–153. Springer, 2020.
|
2306.07932#38
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 39 |
The effectiveness of ChatGPT was assessed by counting how many questions it answered correctly at each level of cognitive difficulty. Figure 7 shows that ChatGPT correctly answered 83% of the questions at the knowledge (recognition) level. At the comprehension level it answered 62% of the questions correctly, demonstrating an adequate grasp of the fundamental ideas. Its performance deteriorated sharply at the application level, where it answered only 27% of the questions correctly. At the highest level of cognitive complexity, the high application level, it answered only 10% of the questions correctly, indicating a limited capacity to apply fundamental ideas to novel problems.
[Figure 7 bar chart: correct-answer rates by question level: K 83%, C 62%, A 27%, H 10%]
Figure 7: ChatGPT's performance in question levels.
According to this evaluation, ChatGPT appears limited in applying newly learned concepts in novel contexts. Future language model development might therefore concentrate on equipping models with more sophisticated and advanced problem-solving abilities so that they can tackle unfamiliar problems. Additional training data and focused training techniques may also improve performance at the application and high application levels, enabling language models to apply acquired concepts more effectively in real-world circumstances.
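As a rough illustration of how such per-level accuracy rates are tallied, the sketch below aggregates graded responses by cognitive level; the record structure and the sample values are hypothetical and stand in for the study's actual grading data.

```python
from collections import defaultdict

# Hypothetical graded responses: (cognitive_level, answered_correctly).
# K = knowledge, C = comprehension, A = application, H = high application.
graded = [("K", True), ("K", True), ("C", True), ("C", False),
          ("A", False), ("A", True), ("H", False), ("H", False)]

totals, correct = defaultdict(int), defaultdict(int)
for level, is_correct in graded:
    totals[level] += 1
    correct[level] += int(is_correct)

for level in ("K", "C", "A", "H"):
    rate = 100 * correct[level] / totals[level]
    print(f"{level}: {rate:.0f}% correct")
```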
|
2306.06331#39
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 39 |
[25] S. Wilson, P. Glotfelter, L. Wang, S. Mayya, G. Notomista, M. Mote, and M. Egerstedt, "The robotarium: Globally impactful opportunities, challenges, and lessons learned in remote-access, distributed control of multirobot systems," IEEE Control Systems Magazine, vol. 40, no. 1, pp. 26–44, 2020.
[26] S. M. LaValle, Planning Algorithms. Cambridge University Press, 2006.
[27] J. Ferrer-Mestres, G. Frances, and H. Geffner, "Combined task and motion planning as classical AI planning," arXiv preprint arXiv:1706.06927, 2017.
[28] C. R. Garrett, T. Lozano-Pérez, and L. P. Kaelbling, "FFRob: Leveraging symbolic planning for efficient task and motion planning," The International Journal of Robotics Research, vol. 37, no. 1, pp. 104–136, 2018.
|
2306.06531#39
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 39 |
Aida Amini, Saadia Gabriel, Peter Lin, Rik Koncel-Kedziorski, Yejin Choi, and Hannaneh Hajishirzi. MathQA: Towards interpretable math word problem solving with operation-based formalisms. arXiv preprint arXiv:1905.13319, 2019.
Benj Edwards. IBM plans to replace 7,800 jobs with AI over time, pauses hiring certain positions; IBM CEO Arvind Krishna says he could see 30% of back-office functions replaced by AI over 5 years. 2023. https://arstechnica.com/information-technology/2023/05/ibm-pauses-hiring-around-7800-roles-that-could-be-replaced-by-ai/.
Nicholas Bien, Pranav Rajpurkar, Robyn L Ball, Jeremy Irvin, Allison Park, Erik Jones, Michael Bereket, Bhavik N Patel, Kristen W Yeom, Katie Shpanskaya, et al. Deep-learning-assisted diagnosis for knee magnetic resonance imaging: development and retrospective validation of MRNet. PLoS Medicine, 15(11):e1002699, 2018.
|
2306.07932#39
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 40 |
Figure 8 shows that ChatGPT attained a striking 100% correct-answer rate on topic M11B. It is crucial to remember that this particular topic only included K-type questions. The correct-answer rates for the remaining topics ranged from 58.89% for M12G to 71.79% for M12B. Notably, M11C and M12A had the lowest rates of correctly answered questions. Most questions belonged to M12A, and the majority of them were at the K level; however, some of these questions relied on information given only in a figure, which ChatGPT could not access, so it was unable to answer all of them. Similarly, ChatGPT did not show much promise for topics such as M11C on spatial geometry and M12G on Oxyz spatial analysis.
However, if the questions that required information from a figure are excluded, ChatGPT answered more than 50% of the questions correctly across all topics. This indicates that ChatGPT shows promise in some of the evaluated areas, but it may need further work to succeed in areas that require more intricate inference and data interpretation.
# 4.4 ChatGPT's performance in VNHSGE and other exams
|
2306.06331#40
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 40 |
[29] A. Akbari, J. Rosell, et al., "Task planning using physics-based heuristics on manipulation actions," in 2016 IEEE 21st International Conference on Emerging Technologies and Factory Automation (ETFA). IEEE, 2016, pp. 1–8.
[30] F. Lagriffoul and B. Andres, "Combining task and motion planning: A culprit detection problem," The International Journal of Robotics Research, vol. 35, no. 8, pp. 890–927, 2016.
[31] J. Wolfe, B. Marthi, and S. Russell, "Combined task and motion planning for mobile manipulation," in Proceedings of the International Conference on Automated Planning and Scheduling, vol. 20, 2010, pp. 254–257.
[32] S. Srivastava, E. Fang, L. Riano, R. Chitnis, S. Russell, and P. Abbeel, "Combined task and motion planning through an extensible planner-independent interface layer," in 2014 IEEE International Conference on Robotics and Automation (ICRA).
|
2306.06531#40
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.06331
| 41 |
# 4.4 ChatGPT's performance in VNHSGE and other exams
We compared ChatGPT's success rate on a number of well-known math exams, as reported by OpenAI [27] and shown in Figure 9, with its result on the VNHSGE mathematics exam. With a success rate of 70%, ChatGPT's performance on the SAT Math exam is better than its performance on the VNHSGE mathematics exam. With rates of 40% for AP Statistics, 25% for the GRE Quantitative, 10% for AMC 10,
[Figure 8 bar chart: correct-answer rates (%) by topic, in the order M11B, M12B, M12D, M11A, M12E, M12C, M12F, M12G, M11C, M12A; horizontal axis 0-100]
Figure 8: ChatGPT's performance in question topics.
|
2306.06331#41
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 41 |
[33] M. Colledanchise, D. Almeida, and P. Ögren, "Towards blended reactive planning and acting using behavior trees," in 2019 International Conference on Robotics and Automation (ICRA). IEEE, 2019, pp. 8839–8845.
[34] L. P. Kaelbling and T. Lozano-Pérez, "Integrated task and motion planning in belief space," The International Journal of Robotics Research, vol. 32, no. 9-10, pp. 1194–1227, 2013.
[35] E. Fernandez-Gonzalez, B. Williams, and E. Karpas, "ScottyActivity: Mixed discrete-continuous planning with convex optimization," Journal of Artificial Intelligence Research, vol. 62, pp. 579–664, 2018.
[36] K. He, M. Lahijanian, L. E. Kavraki, and M. Y. Vardi, "Towards manipulation planning with temporal logic specifications," in 2015 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2015, pp. 346–352.
|
2306.06531#41
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 41 |
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin, editors, Advances in Neural Information Processing Systems, volume 33, pages 1877–1901. Curran Associates, Inc., 2020a. URL https://proceedings.neurips.cc/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf.
|
2306.07932#41
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 42 |
Figure 8: ChatGPT's performance in question topics.
4% for AMC 12, and only 1% for AP Calculus BC, ChatGPT performed much worse on the other exams. It is important to note that these comparisons are only a rough guide, because math examinations differ in their formats, structures, difficulty levels, and question types. As a result, the complexity of the VNHSGE exam cannot be judged solely from ChatGPT's performance on other exams. Nevertheless, the comparison gives a general idea of the VNHSGE exam's difficulty relative to other math competitions.
[Figure 9 bar chart: ChatGPT success rates (%) on SAT Math, VNHSGE Mathematics, AP Statistics, GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC]
Figure 9: ChatGPT's performance in VNHSGE mathematics and other exams.
# 4.5 ChatGPT's performance and Vietnamese students
|
2306.06331#42
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 42 |
[37] M. Katayama, S. Tokuda, M. Yamakita, and H. Oyama, "Fast LTL-based flexible planning for dual-arm manipulation," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2020, pp. 6605–6612.
[38] R. Takano, H. Oyama, and M. Yamakita, "Continuous optimization-based task and motion planning with signal temporal logic specifications for sequential manipulation," in 2021 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2021, pp. 8409–8415.
[39] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa, "Large language models are zero-shot reasoners," in ICML 2022 Workshop on Knowledge Retrieval and Language Models, 2022. [Online]. Available: https://openreview.net/forum?id=6p3AuaHAFiN
|
2306.06531#42
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 42 |
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33:1877–1901, 2020b.
Karl Cobbe, Vineet Kosaraju, Mohammad Bavarian, Jacob Hilton, Reiichiro Nakano, Christopher Hesse, and John Schulman. Training verifiers to solve math word problems. arXiv preprint arXiv:2110.14168, 2021.
Bart Custers. AI in criminal law: An overview of AI applications in substantive and procedural criminal law. Law and Artificial Intelligence, pages 205–223, 2022.
Angela Fan, Mike Lewis, and Yann Dauphin. Hierarchical neural story generation. arXiv preprint arXiv:1805.04833, 2018.
Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. arXiv preprint arXiv:1707.02633, 2017.
|
2306.07932#42
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 43 |
Figure 9: ChatGPT's performance in VNHSGE mathematics and other exams.
# 4.5 ChatGPT's performance and Vietnamese students
Figures 10-14 compare ChatGPT's math scores with Vietnamese students' scores for the years 2019, 2020, 2021, 2022, and 2023. Notably, the findings show that across the investigated years, ChatGPT's math scores have consistently been lower than those of the majority of Vietnamese students. Further analysis of the performance data can shed light on potential causes of the gap between ChatGPT and human students; factors such as different learning styles and approaches, resource accessibility, and cultural background may contribute to the variance. Additionally, ChatGPT's performance might improve with further training and model refinement.
Another key drawback of this AI model is ChatGPT's inability to access, read, and comprehend graphical information in test questions. Tables, charts, and other graphical representations are frequently used in mathematics exams to communicate data visually. ChatGPT's inability to interpret such graphical data limits its capacity to offer precise answers to this kind of question.
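To make the comparison with the student score distribution concrete, the sketch below estimates what fraction of students scored below a given model score using a binned histogram; the bin boundaries, student counts, and the model score are placeholder values, not the published VNHSGE statistics.

```python
# Hypothetical score histogram: (upper_bound_of_score_bin, number_of_students).
# All numbers are illustrative placeholders.
histogram = [(2.0, 5_000), (4.0, 60_000), (6.0, 300_000), (8.0, 400_000), (10.0, 120_000)]

def share_scoring_below(score: float, bins) -> float:
    """Percentage of students whose bin lies entirely below the given score."""
    total = sum(count for _, count in bins)
    below = sum(count for upper, count in bins if upper <= score)
    return 100.0 * below / total

chatgpt_score = 5.9  # illustrative model score on the 10-point scale
print(f"Approximately {share_scoring_below(chatgpt_score, histogram):.0f}% "
      f"of students scored below {chatgpt_score}.")
```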
|
2306.06331#43
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 43 |
[40] T. Silver, V. Hariprasad, R. S. Shuttleworth, N. Kumar, T. Lozano-Pérez, and L. P. Kaelbling, "PDDL planning with pretrained large language models," in NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022. [Online]. Available: https://openreview.net/forum?id=1QMMUB4zfl
[41] L. S. Zettlemoyer and M. Collins, "Learning to map sentences to logical form: structured classification with probabilistic categorial grammars," in Proceedings of the Twenty-First Conference on Uncertainty in Artificial Intelligence, 2005, pp. 658–666.
[42] L. Zettlemoyer and M. Collins, "Online learning of relaxed CCG grammars for parsing to logical form," in Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL), 2007, pp. 678–687.
|
2306.06531#43
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 43 |
Jessica Ficler and Yoav Goldberg. Controlling linguistic style aspects in neural language generation. arXiv preprint arXiv:1707.02633, 2017.
Mor Geva, Daniel Khashabi, Elad Segal, Tushar Khot, Dan Roth, and Jonathan Berant. Did Aristotle use a laptop? A question answering benchmark with implicit reasoning strategies. Transactions of the Association for Computational Linguistics, 9:346–361, 2021.
Google Research. Minerva: Solving quantitative reasoning problems with language models, 2023.
Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. Learning to write with cooperative discriminators. arXiv preprint arXiv:1805.06087, 2018.
Ari Holtzman, Jan Buys, Li Du, Maxwell Forbes, and Yejin Choi. The curious case of neural text degeneration. arXiv preprint arXiv:1904.09751, 2019.
Mohammad Javad Hosseini, Hannaneh Hajishirzi, Oren Etzioni, and Nate Kushman. Learning to solve arithmetic word problems with verb categorization. In EMNLP, pages 523–533. Citeseer, 2014.
Waldemar Karwowski. International Encyclopedia of Ergonomics and Human Factors, 3 Volume Set. CRC Press, 2006.
|
2306.07932#43
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 44 |
This restriction is not specific to ChatGPT; many other AI models also have trouble comprehending graphical data. Reading text requires a different set of abilities than analyzing images and other visual information. Text-based AI models like ChatGPT rely on NLP to comprehend and process text inputs, whereas image-based AI models use computer vision techniques to interpret visual inputs.
One potential way around this restriction is to enhance ChatGPT's capacity to comprehend visual data, either by adding computer vision capabilities to the model or by creating a hybrid model that blends NLP and computer vision methods. Alternatively, the test format could be changed to eliminate graphical data or to offer text-based representations of it; however, this would require significant modifications to the test design and might not always be feasible.
|
2306.06331#44
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 44 |
[43] Y. W. Wong and R. J. Mooney, "Learning for semantic parsing with statistical machine translation," in Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics. Association for Computational Linguistics, 2006, pp. 439–446.
[44] J. Dzifcak, M. Scheutz, C. Baral, and P. Schermerhorn, "What to do and how to do it: Translating natural language directives into temporal and dynamic logic representation for goal management and action execution," in 2009 IEEE International Conference on Robotics and Automation.
[45] Y. Artzi and L. Zettlemoyer, "Weakly supervised learning of semantic parsers for mapping instructions to actions," Transactions of the Association for Computational Linguistics, vol. 1, pp. 49–62, 2013.
[46] T. M. Howard, S. Tellex, and N. Roy, "A natural language planner interface for mobile manipulators," in 2014 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 2014, pp. 6652–6659.
|
2306.06531#44
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 44 |
Waldemar Karwowski. International Encyclopedia of Ergonomics and Human Factors, 3 Volume Set. CRC Press, 2006.
Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
Rik Koncel-Kedziorski, Hannaneh Hajishirzi, Ashish Sabharwal, Oren Etzioni, and Siena Dumas Ang. Parsing algebraic word problems into equations. Transactions of the Association for Computational Linguistics, 3:585–597, 2015.
Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. Learning to automatically solve algebra word problems. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 271–281, 2014.
Shen-Yun Miao, Chao-Chun Liang, and Keh-Yih Su. A diverse corpus for evaluating and developing english math word problem solvers. arXiv preprint arXiv:2106.15772, 2021.
|
2306.07932#44
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 45 |
[Score-distribution histogram: number of students (×10^4) per mathematics score bin for Vietnamese examinees; the individual bin counts are not reliably recoverable from the extracted text.]
|
2306.06331#45
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 45 |
[47] A. Boteanu, J. Arkin, T. Howard, and H. Kress-Gazit, "A model for verifiable grounding and execution of complex language instructions," in Proceedings of the 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems, Oct. 2016.
[48] N. Gopalan, D. Arumugam, L. L. Wong, and S. Tellex, "Sequence-to-sequence language grounding of non-Markovian task specifications," in Robotics: Science and Systems, vol. 2018, 2018.
[49] R. Patel, E. Pavlick, and S. Tellex, "Grounding language to non-Markovian tasks with no supervision of task specifications," in Robotics: Science and Systems, vol. 2020, 2020.
[50] H. Kress-Gazit, G. E. Fainekos, and G. J. Pappas, "Translating structured English to robot controllers," Advanced Robotics, vol. 22, no. 12, pp. 1343–1359, 2008.
|
2306.06531#45
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 45 |
Eduardo Mosqueira-Rey, Elena Hernández-Pereira, David Alonso-Ríos, José Bobes-Bascarán, and Ángel Fernández-Leal. Human-in-the-loop machine learning: A state of the art. Artificial Intelligence Review, 56(4):3005–3054, 2023.
Long Ouyang, Jeff Wu, Xu Jiang, Diogo Almeida, Carroll L Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. Training language models to follow instructions with human feedback. arXiv preprint arXiv:2203.02155, 2022.
Arkil Patel, Satwik Bhattamishra, and Navin Goyal. Are nlp models really able to solve simple math word problems? arXiv preprint arXiv:2103.07191, 2021.
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413, 2016.
|
2306.07932#45
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06531
| 46 |
[51] J. He, E. Bartocci, D. Ničković, H. Isakovic, and R. Grosu, "Deepstl: from english requirements to signal temporal logic," in Proceedings of the 44th International Conference on Software Engineering, 2022, pp. 610–622.
[52] S. Mohammadinejad, J. Thomason, and J. V. Deshmukh, "Interactive learning from natural language and demonstrations using signal temporal logic," arXiv preprint arXiv:2207.00627, 2022.
[53] C. N. Bonial, L. Donatelli, J. Ervin, and C. R. Voss, "Abstract meaning representation for human-robot dialogue," Proceedings of the Society for Computation in Linguistics, vol. 2, no. 1, pp. 236–246, 2019.
[54] S. Tellex, N. Gopalan, H. Kress-Gazit, and C. Matuszek, "Robots that use language," Annual Review of Control, Robotics, and Autonomous Systems, vol. 3, pp. 25–55, 2020.
|
2306.06531#46
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 46 |
Subhro Roy and Dan Roth. Solving general arithmetic word problems. arXiv preprint arXiv:1608.01413, 2016.
Victor Sanh, Albert Webson, Colin Raffel, Stephen H Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Teven Le Scao, Arun Raja, et al. Multitask prompted training enables zero-shot task generalization. arXiv preprint arXiv:2110.08207, 2021.
Nan Shao, Zefan Cai, Chonghua Liao, Yanan Zheng, Zhilin Yang, et al. Compositional task representations for large language models. In The Eleventh International Conference on Learning Representations, 2023.
Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models. arXiv preprint arXiv:2206.04615, 2022.
|
2306.07932#46
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06531
| 47 |
[55] J. X. Liu, Z. Yang, B. Schornstein, S. Liang, I. Idrees, S. Tellex, and A. Shah, "Lang2LTL: Translating natural language commands to temporal specification with large language models," in Workshop on Language and Robotics at CoRL 2022, 2022. [Online]. Available: https://openreview.net/forum?id=VxfjGZzrdn
[56] L. Guan, K. Valmeekam, S. Sreedharan, and S. Kambhampati, "Leveraging pre-trained large language models to construct and utilize world models for model-based task planning," arXiv preprint arXiv:2305.14909, 2023.
[57] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng, "Code as policies: Language model programs for embodied control," arXiv preprint arXiv:2209.07753, 2022.
|
2306.06531#47
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 47 |
Alon Talmor, Jonathan Herzig, Nicholas Lourie, and Jonathan Berant. Commonsenseqa: A question answering challenge targeting commonsense knowledge. arXiv preprint arXiv:1811.00937, 2018.
Hal R. Varian. Intermediate microeconomics: a modern approach. New York: W.W. Norton Company, 2014.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, and Denny Zhou. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171, 2022.
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Ed Chi, Quoc Le, and Denny Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
Xingjiao Wu, Luwei Xiao, Yixuan Sun, Junhang Zhang, Tianlong Ma, and Liang He. A survey of human-in-the-loop for machine learning. Future Generation Computer Systems, 2022.
Yi Yang, Zhigang Ma, Feiping Nie, Xiaojun Chang, and Alexander G Hauptmann. Multi-class active learning by uncertainty sampling with diversity maximization. International Journal of Computer Vision, 113(2):113–127, 2015.
|
2306.07932#47
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 48 |
[Figure: score-spectrum histogram (tens of thousands of students per score bin); the bar values and axis ticks were garbled during text extraction and are omitted.]
|
2306.06331#48
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06531
| 48 |
Idrees, D. Paulius, and S. Tellex, "Planning with large language models via corrective re-prompting," in NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022. [Online]. Available: https://openreview.net/forum?id=cMDMRBe1TKs
[59] T. Silver, S. Dan, K. Srinivas, J. B. Tenenbaum, L. P. Kaelbling, and M. Katz, "Generalized planning in pddl domains with pretrained large language models," arXiv preprint arXiv:2305.11014, 2023.
|
2306.06531#48
|
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
|
For effective human-robot interaction, robots need to understand, plan, and
execute complex, long-horizon tasks described by natural language. Recent
advances in large language models (LLMs) have shown promise for translating
natural language into robot action sequences for complex tasks. However,
existing approaches either translate the natural language directly into robot
trajectories or factor the inference process by decomposing language into task
sub-goals and relying on a motion planner to execute each sub-goal. When
complex environmental and temporal constraints are involved, inference over
planning tasks must be performed jointly with motion plans using traditional
task-and-motion planning (TAMP) algorithms, making factorization into subgoals
untenable. Rather than using LLMs to directly plan task sub-goals, we instead
perform few-shot translation from natural language task descriptions to an
intermediate task representation that can then be consumed by a TAMP algorithm
to jointly solve the task and motion plan. To improve translation, we
automatically detect and correct both syntactic and semantic errors via
autoregressive re-prompting, resulting in significant improvements in task
completion. We show that our approach outperforms several methods using LLMs as
planners in complex task domains. See our project website
https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
|
http://arxiv.org/pdf/2306.06531
|
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
|
cs.RO, cs.CL, cs.HC
|
8 pages, 4 figures
| null |
cs.RO
|
20230610
|
20230927
|
[
{
"id": "1706.06927"
},
{
"id": "2207.00627"
},
{
"id": "2305.14909"
},
{
"id": "2305.07766"
},
{
"id": "2304.11477"
},
{
"id": "2304.03893"
},
{
"id": "2204.01691"
},
{
"id": "2305.05658"
},
{
"id": "2207.05608"
},
{
"id": "2303.08006"
},
{
"id": "2305.11014"
},
{
"id": "2303.06247"
},
{
"id": "2303.14100"
},
{
"id": "2303.12153"
},
{
"id": "2206.10498"
},
{
"id": "2302.05128"
},
{
"id": "2209.07753"
}
] |
2306.07932
| 48 |
Fabio Massimo Zanzotto. Human-in-the-loop artificial intelligence. Journal of Artificial Intelligence Research, 64:243–252, 2019.
Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
# A Experiments for Filtering Stage
After the first stage, the LLM samples several plausible rationales for a problem, and these rationales may arrive at different final answers. Just as with humans, there are countless ways for an LLM to make a mistake, but only a limited number of correct rationales lead to the right result. If most of the sampled rationales fail to agree, the sample is, with high probability, predicted incorrectly. To support this empirically, we conduct quantitative experiments and find that incorrectly predicted samples tend to show greater diversity in their final answers when solving difficult reasoning problems.
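To make this intuition concrete, the sketch below scores the disagreement among sampled final answers. It is only an illustration: the exact Diversity Entropy definition is given in Sec. 2.1 of the paper and is not reproduced here, and the function name and the use of normalized Shannon entropy are assumptions of this sketch rather than the paper's implementation.

```python
from collections import Counter
from math import log

def diversity_score(final_answers):
    """Normalized Shannon entropy of the sampled final answers.

    Returns 0.0 when every sampled rationale reaches the same answer and
    grows toward 1.0 as the answers become more diverse. This is a stand-in
    for the paper's Diversity Entropy (Sec. 2.1), not its exact definition.
    """
    n = len(final_answers)
    counts = Counter(final_answers)
    if len(counts) == 1:
        return 0.0
    probs = [c / n for c in counts.values()]
    entropy = -sum(p * log(p) for p in probs)
    return entropy / log(n)  # normalize by the maximum entropy for n samples

# Five sampled rationales per problem: unanimous answers suggest the
# prediction is likely correct; disagreement flags the sample for review.
print(diversity_score(["42", "42", "42", "42", "42"]))  # 0.0
print(diversity_score(["42", "40", "42", "17", "42"]))  # ~0.59
```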
|
2306.07932#48
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 49 |
Specifically, the LLM is prompted with a set of manually written CoT exemplars following Wei et al. [2022] in the first stage. Then, we sample a set of 5 candidate outputs from the LLM's decoder to generate a set of rationales. Based on the sampled rationales, we divide the samples into two parts: Part 1 contains the samples whose sampled rationales all point to the same final answer (i.e., the Diversity Entropy score defined in Sec. 2.1 equals 0); Part 2 contains the remaining samples, whose sampled rationales point to different final answers (i.e., the Diversity Entropy score is greater than 0). Next, we calculate the accuracy of Part 1 and Part 2 for each dataset separately. We use the first answer of each sample as the result of CoT-Prompting and use all five answers to calculate the Diversity Entropy score. The results are shown in Tab. 6, Tab. 7, Tab. 8 and Tab. 9. The accuracy on Part 1 is generally higher than on Part 2. This demonstrates the superiority of Diversity Entropy and experimentally confirms the intuition that incorrectly predicted samples tend to have greater diversity in their final answers when solving difficult reasoning problems.
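A minimal sketch of this Part 1 / Part 2 split is given below, assuming each sample carries its five decoded final answers and a gold answer; the record fields and function name are illustrative and are not taken from the paper's code.

```python
def split_and_score(samples):
    """Split samples by answer agreement and score each part separately.

    Part 1: all five sampled final answers agree (Diversity Entropy = 0).
    Part 2: the sampled answers disagree (Diversity Entropy > 0).
    Accuracy follows the CoT-Prompting convention above: only the first
    sampled answer of each sample is compared against the gold answer.
    """
    part1 = [s for s in samples if len(set(s["answers"])) == 1]
    part2 = [s for s in samples if len(set(s["answers"])) > 1]

    def accuracy(part):
        if not part:
            return float("nan")
        return sum(s["answers"][0] == s["gold"] for s in part) / len(part)

    return accuracy(part1), accuracy(part2)

samples = [
    {"answers": ["8", "8", "8", "8", "8"], "gold": "8"},  # goes to Part 1
    {"answers": ["8", "6", "8", "3", "8"], "gold": "6"},  # goes to Part 2
]
print(split_and_score(samples))  # (1.0, 0.0) on this toy input
```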
# B Experiments for Correction Stage
|
2306.07932#49
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 50 |
# B Experiments for Correction Stage
B.1 Incorrect Rationale Could Output the Correct Final Answer after Manually Correcting the Erroneous Rationale.
An incorrect rationale can yield the correct final answer once the erroneous part of the rationale is corrected. To prove this empirically, we conduct quantitative experiments on twelve datasets and find that, in general, most CoT errors are indeed caused by incorrect rationales; after these incorrect rationales are corrected, the final answers turn out to be correct.
Specifically, we explored the limits of the CoT-based methods (namely CoT-Prompting, Self-Consistency, and MCS) when humans correct rationales while disregarding cost. Humans were instructed to thoroughly check all samples and ensure the correctness of all rationales. Tables 10 and 11 present the results, where the upper bound of CoT-Prompting is denoted as CoT-Upperbound and the upper bound of Self-Consistency is denoted as SC-Upperbound. Self-Consistency and MCS+Self-Consistency have the same upper bound in extreme cases (i.e., the threshold of Diversity Entropy
|
2306.07932#50
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 51 |
[Figure: score-spectrum histogram, "Number of Students" (×10^4) versus mathematics score; the bar values and axis ticks were garbled during text extraction and are omitted.]
|
2306.06331#51
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 51 |
Arithmetic Reasoning
Method           | Part     | AddSub (Num. / Ratio / Acc.) | MultiArith (Num. / Ratio / Acc.) | SingleEq (Num. / Ratio / Acc.)
CoT-Prompting    | Part 1   | 245 / 62.03% / 97.55  | 299 / 49.83% / 100.00 | 369 / 72.64% / 97.83
CoT-Prompting    | Part 2   | 150 / 37.97% / 53.33  | 301 / 50.17% / 82.39  | 139 / 27.36% / 51.08
CoT-Prompting    | Part 1&2 | 395 / 100.00% / 82.78 | 600 / 100.00% / 93.00 | 508 / 100.00% / 85.04
Self-Consistency | Part 1   | 245 / 62.03% / 97.55  | 299 / 49.83% / 100.00 | 369 / 72.64% / 97.83
Self-Consistency | Part 2   | 150 / 37.97% / 71.33  | 301 / 50.17% / 87.38  | 139 / 27.36% / 66.19
Self-Consistency | Part 1&2 | 395 / 100.00% / 90.63 | 600 / 100.00% / 94.17 | 508 / 100.00% / 89.17
Table 6: Analysis for Diversity Entropy in Filtering Stage (I). The accuracy of Part 1 is generally larger than Part 2. The result demonstrates the superiority of Diversity Entropy and experimentally confirms the intuition that incorrectly predicted samples tend to have greater diversity in their final answer when solving difficult reasoning problems. For each task, we report the median scores among 5 runs.
|
2306.07932#51
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 52 |
Arithmetic Reasoning
Method           | Part     | SingleOp (Num. / Ratio / Acc.) | ASDiv (Num. / Ratio / Acc.) | AQuA (Num. / Ratio / Acc.)
CoT-Prompting    | Part 1   | 423 / 75.27% / 98.35  | 1122 / 53.53% / 96.88  | 48 / 18.90% / 52.08
CoT-Prompting    | Part 2   | 139 / 24.73% / 58.99  | 974 / 46.47% / 42.51   | 206 / 81.10% / 37.38
CoT-Prompting    | Part 1&2 | 562 / 100.00% / 94.84 | 2096 / 100.00% / 73.19 | 254 / 100.00% / 40.55
Self-Consistency | Part 1   | 423 / 75.27% / 98.35  | 1122 / 53.53% / 96.88  | 48 / 18.90% / 52.08
Self-Consistency | Part 2   | 139 / 24.73% / 70.50  | 974 / 46.47% / 52.78   | 206 / 81.10% / 32.04
Self-Consistency | Part 1&2 | 562 / 100.00% / 95.73 | 2096 / 100.00% / 77.72 | 254 / 100.00% / 38.19
Table 7: Analysis for Diversity Entropy in Filtering Stage (II). The accuracy of Part 1 is generally larger than Part 2. The result demonstrates the superiority of Diversity Entropy and experimentally confirms the intuition that incorrectly predicted samples tend to have greater diversity in their final answer when solving difficult reasoning problems. For each task, we report the median scores among 5 runs.
|
2306.07932#52
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 53 |
score is set to 100%) while CoT-Upperbound and MCS have the same upper bound in extreme cases (i.e., the threshold of Diversity Entropy score is set to 100%). The experimental results demonstrate that the upper bounds are quite high, indicating that an incorrect rationale could produce the correct final answer after correcting the errors. To note, this limitation represents only the upper bounds of our method, and its practical implementation would require significant time and resources.
# B.2 Correcting Erroneous Sub-logic Indeed Solves the Majority of Erroneous Rationale.
Correcting erroneous sub-logic indeed solves the majority of erroneous rationale. We conduct the analytical experiment across multiple tasks in Sec. 4.3 and provide the evidence.
We conduct experiments on twelve datasets to check whether correcting sub-logics solves the majority of incorrect rationales. Each task is represented by a pie chart. For each task, we conduct an error analysis of CoT prompting and analyze the error types of the rationales. We divide the error types into four categories: errors that can be corrected by the "modifying" operation, the "adding" operation, or the "deleting" operation, and the remaining errors that cannot be manually corrected. The percentage of each type across datasets is shown in Fig. 3.
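As a concrete illustration of the three correctable categories, the sketch below treats a rationale as an ordered list of sub-logics and applies a single "modifying", "adding", or "deleting" edit; the function names and the toy rationale are hypothetical and only mirror the operation names used above.

```python
def modify(sublogics, i, new_text):
    """Replace the i-th sub-logic with a corrected one."""
    return sublogics[:i] + [new_text] + sublogics[i + 1:]

def add(sublogics, i, new_text):
    """Insert a missing sub-logic before position i."""
    return sublogics[:i] + [new_text] + sublogics[i:]

def delete(sublogics, i):
    """Drop a spurious sub-logic."""
    return sublogics[:i] + sublogics[i + 1:]

rationale = [
    "There are 3 baskets with 7 apples each, so there are 3 * 7 = 21 apples.",
    "5 apples are eaten, so 21 + 5 = 26 apples remain.",  # erroneous sub-logic
]
fixed = modify(rationale, 1, "5 apples are eaten, so 21 - 5 = 16 apples remain.")
print(fixed)
```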
|
2306.07932#53
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 54 |
[Figure: score-spectrum histogram (tens of thousands of students per score bin); the bar values and axis ticks were garbled during text extraction and are omitted.]
|
2306.06331#54
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 54 |
Sec. 4.3 presents experiments in Fig. 3 on twelve datasets to check whether correcting sub-logics solves the majority of erroneous rationales. Figure 3 illustrates the error analysis of CoT Prompting across the twelve tasks. We list the detailed numbers of the error analysis in Tab. 12 and Tab. 13. Results show that correcting an erroneous sub-logic indeed resolves the majority of erroneous rationales (i.e., most erroneous rationales can be corrected by editing only a single erroneous sub-logic).
|
2306.07932#54
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 55 |
Arithmetic Reasoning (SVAMP, GSM8K) | Commonsense Reasoning (CSQA)
Method           | Part     | SVAMP (Num. / Ratio / Acc.) | GSM8K (Num. / Ratio / Acc.) | CSQA (Num. / Ratio / Acc.)
CoT-Prompting    | Part 1   | 438 / 43.80% / 92.92   | 256 / 19.41% / 93.36   | 792 / 64.86% / 85.98
CoT-Prompting    | Part 2   | 562 / 56.20% / 47.86   | 1063 / 80.59% / 47.70  | 429 / 35.14% / 47.09
CoT-Prompting    | Part 1&2 | 1000 / 100.00% / 68.00 | 1319 / 100.00% / 56.48 | 1221 / 100.00% / 72.32
Self-Consistency | Part 1   | 438 / 43.80% / 92.92   | 256 / 19.41% / 93.36   | 792 / 64.86% / 85.98
Self-Consistency | Part 2   | 562 / 56.20% / 62.46   | 1063 / 80.59% / 50.71  | 429 / 35.14% / 57.81
Self-Consistency | Part 1&2 | 1000 / 100.00% / 75.70 | 1319 / 100.00% / 58.85 | 1221 / 100.00% / 76.09
Table 8: Analysis for Diversity Entropy in Filtering Stage (III). The accuracy of Part 1 is generally larger than Part 2. The result demonstrates the superiority of Diversity Entropy and experimentally confirms the intuition that incorrectly predicted samples tend to have greater diversity in their final answer when solving difficult reasoning problems. For each task, we report the median scores among 5 runs.
|
2306.07932#55
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 56 |
Commonsense Reasoning (StrategyQA) | Symbolic Reasoning (Letter (4), Coinflip (4))
Method           | Part     | StrategyQA (Num. / Ratio / Acc.) | Letter (4) (Num. / Ratio / Acc.) | Coinflip (4) (Num. / Ratio / Acc.)
CoT-Prompting    | Part 1   | 1502 / 65.88% / 66.31  | 175 / 35.00% / 72.00  | 384 / 38.40% / 98.70
CoT-Prompting    | Part 2   | 778 / 34.12% / 48.59   | 325 / 65.00% / 36.31  | 616 / 61.60% / 69.48
CoT-Prompting    | Part 1&2 | 2280 / 100.00% / 60.13 | 500 / 100.00% / 49.20 | 1000 / 100.00% / 81.40
Self-Consistency | Part 1   | 1502 / 65.88% / 66.31  | 175 / 35.00% / 72.00  | 384 / 38.40% / 98.70
Self-Consistency | Part 2   | 778 / 34.12% / 52.57   | 325 / 65.00% / 44.62  | 616 / 61.60% / 89.61
Self-Consistency | Part 1&2 | 2280 / 100.00% / 61.40 | 500 / 100.00% / 54.40 | 1000 / 100.00% / 93.20
Table 9: Analysis for Diversity Entropy in Filtering Stage (IV). The accuracy of Part 1 is generally larger than Part 2. The result demonstrates the superiority of Diversity Entropy and experimentally confirms the intuition that incorrectly predicted samples tend to have greater diversity in their final answer when solving difficult reasoning problems. For each task, we report the median scores among 5 runs.
|
2306.07932#56
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 57 |
[Figure: score-spectrum histogram, "Number of Students" (×10^4) versus mathematics score; the bar values and axis ticks were garbled during text extraction and are omitted.]
|
2306.06331#57
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 57 |
B.3 Correcting Each Sub-logic Independently is Much Easier and More User-friendly than Correcting the Entire Rationale
We conduct a human evaluation. The questionnaire survey shows that correcting each sub-logic independently (i.e., our approach) is much easier and more user-friendly than checking the entire rationale. Tab. 14 and Tab. 15 report the time that humans need to check and correct the incorrect sub-logics compared to correcting the entire rationale.
The result presents the average time (seconds) needed for a human to check and correct the incorrect sub-logics compared to correcting the entire rationale for each sample. The time humans need to
Arithmetic Reasoning
Model            | AddSub | MultiArith | SingleEq | SingleOp | ASDiv | AQuA  | SVAMP | GSM8K
CoT-Prompting    | 82.78  | 93.00      | 85.04    | 94.84    | 73.19 | 40.55 | 68.00 | 56.48
CoT-Upperbound   | 97.72  | 96.33      | 94.09    | 96.80    | 75.62 | 47.64 | 77.50 | 63.76
Self-Consistency | 90.63  | 94.17      | 89.17    | 95.73    | 77.72 | 38.19 | 75.70 | 58.85
SC-Upperbound    | 98.48  | 96.33      | 95.67    | 98.93    | 81.58 | 44.49 | 82.00 | 64.67
|
2306.07932#57
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.07932
| 58 |
Table 10: Upperbound Analysis of CoT-Prompting, Self-Consistency and MCS (I). The experimental results demonstrate that the upper bounds are quite high, indicating that an incorrect rationale could produce the correct final answer after correcting the errors. To note, this limitation represents only the upper bounds of our method, and its practical implementation would require significant time and resources. For each task, we report the median scores among 5 runs.
Model            | Commonsense: CSQA | Commonsense: StraQA | Symbolic: Letter | Symbolic: Coinflip
CoT-Prompting    | 72.32 | 60.13 | 49.20 | 81.40
CoT-Upperbound   | 74.61 | 60.88 | 93.80 | 81.40
Self-Consistency | 76.09 | 61.40 | 54.40 | 93.20
SC-Upperbound    | 77.97 | 62.23 | 96.00 | 93.20
Table 11: Upperbound Analysis of CoT-Prompting, Self-Consistency and MCS (II). The experimental results demonstrate that the upper bounds are quite high, indicating that an incorrect rationale could produce the correct final answer after correcting the errors. To note, this limitation represents only the upper bounds of our method, and its practical implementation would require significant time and resources. For each task, we report the median scores among 5 runs.
|
2306.07932#58
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 59 |
Figure 14: Mathematics score spectrum of Vietnamese students in 2023.
# 5 Discussion
While ChatGPT has certain limitations in the field of mathematics [26], [29], [30], it has the potential to be a beneficial resource for educators and learners in the field of education [31], [32]. Nevertheless, ChatGPT must continue to prove its ability in order to earn trust. Therefore, we need in-depth and detailed studies of its capabilities in areas like mathematics. The findings of this study demonstrate that ChatGPT, a large language model trained by OpenAI, is capable of solving math problems to a certain extent but still has difficulties comprehending and interpreting graphical data in test questions. ChatGPT's total success rate on the VNHSGE exam ranged from 52% to 66%, below the typical success rate of Vietnamese students taking the same exam. This shows that ChatGPT's capacity to tackle mathematical problems still needs to be enhanced.
|
2306.06331#59
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 59 |
Arithmetic Reasoning (each cell: Num. / Ratio)
Operation | AddSub | MultiArith | SingleEq | SingleOp | ASDiv | AQuA
Modifying | 33 / 92% | 22 / 24% | 3 / 11% | 19 / 28% | 15 / 4% | 2 / 1%
Adding | 0 / 0% | 10 / 11% | 0 / 0% | 19 / 28% | 38 / 10% | 16 / 16%
Deleting | 0 / 0% | 0 / 0% | 7 / 25% | 0 / 0% | 0 / 0% | 0 / 0%
Unable | 3 / 8% | 60 / 65% | 18 / 64% | 30 / 44% | 327 / 86% | 132 / 88%
Table 12: Detailed numbers of the error analysis (I). The results are the detailed numbers of Fig. 3.
check and correct the incorrect sub-logics is much less than the time needed to correct the entire rationale for each sample, proving that correcting each sub-logic independently is much easier and more user-friendly for humans than checking the entire rationale.
# C Inference for CAMLOP
Given a model parameterized by $c$, $d$, and a fixed cost $y$, the model predicts the optimal choice $(x_1^*, x_2^*)$ with the highest utility, which is desired by the company's strategic decision-makers. Note an important feature of this optimal choice: at this data point (namely, the optimal choice point) the indifference curve is tangent to the budget line $p_1 x_1 + p_2 x_2 = y$. According to this feature, the inference is to get $(x_1^*, x_2^*)$
|
2306.07932#59
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 60 |
Further examination of ChatGPT's performance in solving mathematical problems revealed that its success rate varied with the difficulty level and topic of the problems. Questions at the K-level had the greatest ChatGPT success rate, indicating a fundamental comprehension of the topic in question. However, the success rate decreased significantly as question difficulty increased. This shows that ChatGPT has trouble solving more difficult math problems, particularly those at the H-level. Additionally, ChatGPT's performance varied depending on the topic. This suggests that ChatGPT's current iteration has limits in its capacity to understand mathematical ideas that call for visual reasoning or the interpretation of graphical data. Future development should focus on ChatGPT's shortcomings in comprehending graphical information in test questions. This constraint could be overcome by creating algorithms and models that enable ChatGPT to read and evaluate visual data, which is crucial for solving many mathematical problems. In summary, ChatGPT performs inconsistently across topics and difficulty levels, despite showing promising results on mathematical questions. ChatGPT's comprehension of intricate mathematical ideas, particularly those involving graphical data, requires further refinement.
|
2306.06331#60
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 60 |
$u_1(x_1^*, x_2^*) = -\frac{p_1}{p_2} \qquad (3)$
which will derive the optimal choice $(x_1^*, x_2^*)$:
$x_1^* = \frac{c}{c+d} \cdot \frac{m}{p_1}, \qquad x_2^* = \frac{d}{c+d} \cdot \frac{m}{p_2} \qquad (4)$
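To make the inference step concrete, the following is a minimal sketch that evaluates Eq. (4) directly. It assumes a Cobb-Douglas-style utility of the form $x_1^c x_2^d$, which is consistent with the demand functions above; the function name and the example numbers are our own illustrative assumptions, not details taken from the paper.

# Minimal sketch of the CAMLOP inference step (Eq. 4), assuming a
# Cobb-Douglas-style utility U(x1, x2) = x1**c * x2**d. The function name,
# the utility form, and the example values are illustrative assumptions.
def optimal_choice(c: float, d: float, m: float, p1: float, p2: float):
    """Return (x1*, x2*) maximizing utility subject to p1*x1 + p2*x2 = m."""
    x1_star = (c / (c + d)) * m / p1
    x2_star = (d / (c + d)) * m / p2
    return x1_star, x2_star

# Example: c = 0.6, d = 0.4, budget m = 100, prices p1 = 2, p2 = 5.
print(optimal_choice(0.6, 0.4, 100.0, 2.0, 5.0))  # -> (30.0, 8.0)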
# D Learning for CAMLOP
We have seen how to make the best decision based on the inference of CAMLOP. But in real life we have to work the other way around: we observe some historical cost and utility datapoints, but our problem is to estimate what kind of utility function is induced from the observations.
Concretely, suppose that we observe a number of industries making choices between LLMs and human workers based on their considerations of commute times, money costs, accuracy, etc. There exists an analytic solution of c, d obtained by statistical techniques that best fit the observed data points. In this way, the historical datapoints give a way to estimate the utility function. More specifically, we use regression analysis to find the utility function that best describes the relation between x and utility. Mean square error is typically employed as the loss function for learning the utility function. The
|
2306.07932#60
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 61 |
In our study, we compared how well ChatGPT performed in a number of well-known math competitions, including SAT Math, VNHSGE mathematics, AP Statistics, GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. The degree of difficulty, the format, and the nature of the questions employed in these contests all differ. ChatGPT achieved its highest success rate, 70%, on the SAT Math test, which is not surprising considering that SAT Math primarily evaluates high school math proficiency. Its success rate on VNHSGE mathematics, on the other hand, was 58.8%; this is a more comprehensive test that covers a wider range of math topics and difficulty levels. It
|
2306.06331#61
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 61 |
Arithmetic Reasoning (SVAMP, GSM8K), Commonsense Reasoning (CSQA, StraQA), Symbolic Reasoning (Letter, Coinflip); each cell: Num. / Ratio
Operation | SVAMP | GSM8K | CSQA | StraQA | Letter (4) | Coinflip (4)
Modifying | 41 / 13% | 54 / 10% | 28 / 8% | 39 / 36% | 223 / 88% | 0 / 0%
Adding | 19 / 6% | 11 / 2% | 0 / 0% | 0 / 0% | 0 / 0% | 0 / 0%
Deleting | 35 / 11% | 25 / 4% | 0 / 0% | 0 / 0% | 0 / 0% | 0 / 0%
Unable | 225 / 70% | 478 / 84% | 310 / 92% | 69 / 64% | 30 / 12% | 186 / 100%
Table 13: Detailed numbers of the error analysis (II). The results are the detailed numbers of Fig. 3.
Human Operation | Arithmetic Reasoning (time per dataset)
Correcting sub-logics | 21s | 24s | 30s | 14s | 26s | 62s | 16s | 45s
Correcting entire rationale | 49s | 80s | 60s | 32s | 44s | 102s | 48s | 77s
Table 14: Time (seconds) spent for correcting the incorrect sub-logics compared to correcting the entire rationale (I). The time humans need to check and correct the incorrect sub-logics is much less than the time needed to correct the entire rationale for each sample, proving that correcting each sub-logic independently is much easier and more user-friendly for humans than checking the entire rationale.
loss function is defined on $J$ training datapoints $X = \{(x^{(1)}, \dots$
|
2306.07932#61
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 62 |
is important to note that, as was mentioned in our earlier investigation, ChatGPT performed better in some areas than others. With success rates of 25% and 1%, respectively, on the GRE Quantitative and AP Calculus BC exams, ChatGPT performed much worse. These contests are renowned for their high degree of complexity and difficulty, with questions that call for highly developed problem-solving abilities and a thorough comprehension of mathematical ideas. These types of challenges are difficult for ChatGPT to understand and analyze, which underlines the shortcomings of current language models. Overall, our analysis of ChatGPT's performance in several math competitions reveals the advantages and disadvantages of present language models for math problem-solving. Even though language models like ChatGPT have advanced significantly in recent years, they still have difficulties processing graphical data, comprehending intricate mathematical ideas, and working out difficult mathematical problems. The goal of future studies could be to overcome these constraints and improve language models' capacity for mathematical problem solving.
# 6 Conclusion
In this study, we assessed how well ChatGPT performed when answering mathematics questions of various levels and topics. The findings revealed that ChatGPT performed poorly on some topics and levels while performing well on others. At Level K, ChatGPT correctly answered 83% of the questions, whereas at Levels C, A, and H, the accuracy rate dropped to 62%, 27%, and 10%, respectively.
|
2306.06331#62
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.06331
| 63 |
Additionally, the accuracy rates of ChatGPT varied depending on the topic, with M11B, M12B, M11A, and M12D having the highest rates and M12A, M11C, and M12G having the lowest rates. It is crucial to highlight that ChatGPT had difficulty with problems requiring graphical interpretation because it could not read and comprehend the images, which led to a poor accuracy rate for questions about derivatives and applications.
Furthermore, ChatGPT's math scores were consistently lower than those of Vietnamese students in the same years. This might be a result of the language model's reliance on pre-existing data and algorithms, as well as its failure to comprehend the context and nuances of the Vietnamese language.
In conclusion, ChatGPT had potential in resolving mathematical issues, but its effectiveness was constrained by elements like graphical interpretation and language understanding. Future studies might concentrate on addressing these limitations and investigating the possibilities of language models in math education.
# References
[1] Jianxing He, Sally L Baxter, Jie Xu, Jiming Xu, Xingtao Zhou, and Kang Zhang. The practical implementation of artificial intelligence technologies in medicine. Nature Medicine, 25(1):30–36, 2019.
[2] Lijia Chen, Pingping Chen, and Zhijian Lin. Artificial intelligence in education: A review. IEEE Access, 8:75264–75278, 2020.
|
2306.06331#63
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 63 |
where the model parameters are c, d. A normal equation or gradient descent can be used to optimize this loss function and obtain the final c, d.
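As an illustration of this fitting step, the sketch below assumes the Cobb-Douglas form $U(x_1, x_2) = x_1^c x_2^d$ implied by Eq. (4), so that $\log U$ is linear in $c$ and $d$ and an ordinary least-squares solve (a normal-equation approach) recovers the parameters; the synthetic datapoints and variable names are our own assumptions, not the paper's actual data or exact procedure.

import numpy as np

# Sketch: estimate utility parameters c, d from observed datapoints
# (x1, x2, utility), assuming U = x1**c * x2**d so that
# log U = c*log(x1) + d*log(x2) and least squares applies.
# The datapoints below are synthetic, for illustration only.
x1 = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
x2 = np.array([2.0, 1.5, 4.0, 2.5, 6.0])
true_c, true_d = 0.7, 0.3
utility = x1**true_c * x2**true_d              # simulated observed utilities

A = np.column_stack([np.log(x1), np.log(x2)])  # design matrix
b = np.log(utility)
(c_hat, d_hat), *_ = np.linalg.lstsq(A, b, rcond=None)
print(c_hat, d_hat)                            # recovers roughly 0.7 and 0.3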
# E Experiment Details
We choose GPT-3 because of its superior CoT reasoning performance, as reported in the work of Wei et al. [2022] and Wang et al. [2022]. Due to the limited context window size (up to 4096 word-pieces for the GPT-3 series of models), we use an 8-shot setting for all datasets. Our experiments are based on access to the OpenAI GPT-3 API. We perform all experiments in the few-shot setting, without training or fine-tuning the LLM. For a fair comparison, we use the same prompts as in the work of Wei et al. [2022]. For arithmetic reasoning tasks, we use the same set of 8 manually written exemplars. For commonsense reasoning tasks, exemplars are randomly selected from the training set with manually written CoT prompts.
We list the exact set of prompts used for all arithmetic reasoning tasks in Tab. 16, since there are multiple sets of prompts introduced in Wei et al. [2022]. The prompts for CommonsenseQA and StrategyQA are the same as used in Wei et al. [2022].
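For reference, a minimal sketch of issuing one such few-shot CoT query is shown below. It assumes the legacy OpenAI completions endpoint with an illustrative model name and decoding parameters, and EXEMPLARS is a placeholder for the 8 manually written exemplars (e.g., those in Tab. 16); none of these details are taken from the paper's exact configuration.

import openai

# Sketch of an 8-shot CoT query against the OpenAI completions API.
# Model name, decoding parameters, and the EXEMPLARS placeholder are
# illustrative assumptions.
openai.api_key = "YOUR_API_KEY"

EXEMPLARS = "..."  # the 8 few-shot Q/A exemplars, e.g. those listed in Tab. 16

def cot_answer(question: str) -> str:
    prompt = EXEMPLARS + "\nQ: " + question + "\nA:"
    response = openai.Completion.create(
        model="text-davinci-002",   # illustrative GPT-3 model
        prompt=prompt,
        max_tokens=256,
        temperature=0.0,            # greedy decoding for CoT-Prompting
    )
    return response["choices"][0]["text"].strip()

print(cot_answer("Olivia has $23. She bought five bagels for $3 each. How much money does she have left?"))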
|
2306.07932#63
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 64 |
[3] Bill Cope, Mary Kalantzis, and Duane Searsmith. Artificial intelligence for education: Knowledge and its assessment in ai-enabled learning ecologies. Educational Philosophy and Theory, 53(12):1229–1245, 2021.
[4] Xuan-Quy Dao, Ngoc-Bich Le, and Thi-My-Thanh Nguyen. Ai-powered moocs: Video lecture generation. In 2021 3rd International Conference on Image, Video and Signal Processing, pages 95–102, 2021.
[5] Thi-My-Thanh Nguyen, Thanh-Hai Diep, Bac-Bien Ngo, Ngoc-Bich Le, and Xuan-Quy Dao. Design of online learning platform with vietnamese virtual assistant. In 2021 6th International Conference on Intelligent Information Technology, pages 51–57, 2021.
[6] Raju Vaishya, Mohd Javaid, Ibrahim Haleem Khan, and Abid Haleem. Artificial intelligence (ai) applications for covid-19 pandemic. Diabetes & Metabolic Syndrome: Clinical Research & Reviews, 14(4):337–339, 2020.
[7] Shanshan Gao. Innovative teaching of integration of artificial intelligence and university mathematics in big data environment. In IOP Conference Series: Materials Science and Engineering, volume 750, page 012137. IOP Publishing, 2020.
|
2306.06331#64
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 64 |
Human Operation | CSQA (Commonsense) | StraQA (Commonsense) | Letter (Symbolic) | Coinflip (Symbolic)
Correcting sub-logics | 24s | 18s | 36s | 14s
Correcting entire rationale | 36s | 28s | 40s | 26s
Table 15: Time (seconds) spent for correcting the incorrect sub-logics compared to correcting the entire rationale (II). The time humans need to check and correct the incorrect sub-logics is much less than the time needed to correct the entire rationale for each sample, proving that correcting each sub-logic independently is much easier and more user-friendly for humans than checking the entire rationale.
|
2306.07932#64
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 65 |
[8] Stefan AD Popenici and Sharon Kerr. Exploring the impact of artificial intelligence on teaching and learning in higher education. Research and Practice in Technology Enhanced Learning, 12(1):1–13, 2017.
[9] Ke Zhang and Ayse Begum Aslan. Ai technologies for education: Recent research & future directions. Computers and Education: Artificial Intelligence, 2:100025, 2021.
[10] Olaf Zawacki-Richter, Victoria I Marín, Melissa Bond, and Franziska Gouverneur. Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1):1–27, 2019.
[11] Mostafa Zafari, Jalal Safari Bazargani, Abolghasem Sadeghi-Niaraki, and Soo-Mi Choi. Artificial intelligence applications in k-12 education: A systematic literature review. IEEE Access, 2022.
[12] Francesc Pedro, Miguel Subosa, Axel Rivas, and Paula Valverde. Artificial intelligence in education: Challenges and opportunities for sustainable development. 2019.
|
2306.06331#65
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 65 |
Q: There are 15 trees in the grove. Grove workers will plant trees in the grove today. After they are done, there will be 21 trees. How many trees did the grove workers plant today? A: We start with 15 trees. Later we have 21 trees. The difference must be the number of trees they planted. So, they must have planted 21 - 15 = 6 trees. The answer is 6. Q: If there are 3 cars in the parking lot and 2 more cars arrive, how many cars are in the parking lot? A: There are 3 cars in the parking lot already. 2 more arrive. Now there are 3 + 2 = 5 cars. The answer is 5. Q: Leah had 32 chocolates and her sister had 42. If they ate 35, how many pieces do they have left in total? A: Leah had 32 chocolates and Leah's sister had 42. That means there were originally 32 + 42 = 74 chocolates. 35 have been eaten. So in total they still have 74 - 35 = 39 chocolates. The answer is 39. Q: Jason had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny?
|
2306.07932#65
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 66 |
[12] Francesc Pedro, Miguel Subosa, Axel Rivas, and Paula Valverde. Artificial intelligence in education: Challenges and opportunities for sustainable development. 2019.
[13] Sayed Fayaz Ahmad, Mohd Khairil Rahmat, Muhammad Shujaat Mubarik, Muhammad Mansoor Alam, and Syed Irfan Hyder. Artificial intelligence and its role in education. Sustainability, 13(22):12902, 2021.
[14] Seungsu Paek and Namhyoung Kim. Analysis of worldwide research trends on the impact of artificial intelligence in education. Sustainability, 13(14):7941, 2021.
[15] Lanqin Zheng, Jiayu Niu, Lu Zhong, and Juliana Fosua Gyasi. The effectiveness of artificial intelligence on learning achievement and learning perception: A meta-analysis. Interactive Learning Environments, pages 1–15, 2021.
[16] Adam Gamoran and Eileen C Hannigan. Algebra for everyone? benefits of college-preparatory mathematics for students with diverse abilities in early secondary school. Educational Evaluation and Policy Analysis, 22(3):241–254, 2000.
[17] Robert Parris Moses, Charles E Cobb, et al. Radical equations: Math literacy and civil rights. Technical report, Beacon Press, 2002.
|
2306.06331#66
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 66 |
had 20 lollipops. He gave Denny some lollipops. Now Jason has 12 lollipops. How many lollipops did Jason give to Denny? A: Jason had 20 lollipops. Since he only has 12 now, he must have given the rest to Denny. The number of lollipops he has given to Denny must have been 20 - 12 = 8 lollipops. The answer is 8. Q: Shawn has five toys. For Christmas, he got two toys each from his mom and dad. How many toys does he have now? A: He has 5 toys. He got 2 from mom, so after that he has 5 + 2 = 7 toys. Then he got 2 more from dad, so in total he has 7 + 2 = 9 toys. The answer is 9. Q: There were nine computers in the server room. Five more computers were installed each day, from monday to thursday. How many computers are now in the server room? A: There are 4 days from monday to thursday. 5 computers were added each day. That means in total 4 * 5 = 20 computers were added. There were 9 computers in the beginning, so now there are 9 + 20 = 29 computers. The answer is 29. Q: Michael had 58 golf balls. On tuesday,
|
2306.07932#66
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 67 |
[17] Robert Parris Moses, Charles E Cobb, et al. Radical equations: Math literacy and civil rights. Technical report, Beacon Press, 2002.
[18] Mohamed Zulhilmi bin Mohamed, Riyan Hidayat, Nurain Nabilah binti Suhaizi, Muhamad Khairul Hakim bin Mahmud, Siti Nurshafikah binti Baharuddin, et al. Artificial intelligence in mathematics education: A systematic literature review. International Electronic Journal of Mathematics Education, 17(3):em0694, 2022.
[19] Sunghwan Hwang. Examining the effects of artificial intelligence on elementary students' mathematics achievement: A meta-analysis. Sustainability, 14(20):13185, 2022.
|
2306.06331#67
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 67 |
There were 9 computers in the beginning, so now there are 9 + 20 = 29 computers. The answer is 29. Q: Michael had 58 golf balls. On tuesday, he lost 23 golf balls. On wednesday, he lost 2 more. How many golf balls did he have at the end of wednesday? A: Michael initially had 58 balls. He lost 23 on Tuesday, so after that he has 58 - 23 = 35 balls. On Wednesday he lost 2 more so now he has 35 - 2 = 33 balls. The answer is 33. Q: Olivia has $23. She bought five bagels for $3 each. How much money does she have left? A: She bought 5 bagels for $3 each. This means she spent 5 * $3 = $15 on the bagels. She had $23 in beginning, so now she has $23 - $15 = $8. The answer is 8.
|
2306.07932#67
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 68 |
[20] Mohanad Halaweh. Chatgpt in education: Strategies for responsible implementation. 2023.
[21] Xiaoming Zhai. ChatGPT User Experience: Implications for Education. SSRN Electronic Journal, 2023.
[22] Enkelejda Kasneci, Kathrin Seßler, Stefan Küchemann, Maria Bannert, Daryna Dementieva, Frank Fischer, Urs Gasser, Georg Groh, Stephan Günnemann, Eyke Hüllermeier, et al. Chatgpt for good? on opportunities and challenges of large language models for education. Learning and Individual Differences, 103:102274, 2023.
[23] Daniel Martin Katz, Michael James Bommarito, Shang Gao, and Pablo Arredondo. Gpt-4 passes the bar exam.
Available at SSRN 4389233, 2023.
[24] Aidan Gilson, Conrad W Safranek, Thomas Huang, Vimig Socrates, Ling Chi, Richard Andrew Taylor, David Chartash, et al. How does chatgpt perform on the united states medical licensing examination? the implications of large language models for medical education and knowledge assessment. JMIR Medical Education, 9(1):e45312, 2023.
|
2306.06331#68
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
This study offers a complete analysis of ChatGPT's mathematics abilities in
responding to multiple-choice questions for the Vietnamese National High School
Graduation Examination (VNHSGE) on a range of subjects and difficulty levels.
The dataset included 250 questions divided into four levels: knowledge (K),
comprehension (C), application (A), and high application (H), and it included
ten themes that covered diverse mathematical concepts. The outcomes demonstrate
that ChatGPT's performance varies depending on the difficulty level and
subject. It performed best on questions at Level (K), with an accuracy rate of
$83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy
rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in
providing responses to questions on subjects including exponential and
logarithmic functions, geometric progression, and arithmetic progression. The
study found that ChatGPT had difficulty correctly answering questions on topics
including derivatives and applications, spatial geometry, and Oxyz spatial
calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese
students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT
Math competition with a success rate of $70\%$, followed by VNHSGE mathematics
($58.8\%)$. However, its success rates were lower on other exams, such as AP
Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These
results suggest that ChatGPT has the potential to be an effective teaching tool
for mathematics, but more work is needed to enhance its handling of graphical
data and address the challenges presented by questions that are getting more
challenging.
|
http://arxiv.org/pdf/2306.06331
|
Xuan-Quy Dao, Ngoc-Bich Le
|
cs.CL, cs.LG
|
17 pages, 14 images
| null |
cs.CL
|
20230610
|
20231031
|
[
{
"id": "2303.08774"
},
{
"id": "2301.13867"
},
{
"id": "2305.12199"
},
{
"id": "2302.03494"
}
] |
2306.07932
| 68 |
# Table 16: Few-shot exemplars for arithmetic reasoning tasks.
# F Diversity Metrics Over Diverse Reasoning Paths
As described in Sec. 4.4, the majority vote method of calculating the answer probability over all sampled rationales can be regarded as taking an unnormalized unweighted sum. As described in Wang et al. [2022], other methods of computing the probability of an answer $a$ include the unnormalized weighted average, normalized weighted average, unnormalized weighted sum, and normalized weighted sum. Tab. 4 shows that the unnormalized unweighted sum generally outperforms the others. We use this setting in all experiments, following Wang et al. [2022].
In practice, the majority vote method of calculating the answer probability over all sampled rationales proposed in Eq. 1 is the same as taking the unweighted sum over $a_i$ (i.e., $\sum_{i=1}^{|N|} \mathbb{1}(a_i = a)$), where $|N|$ denotes the number of answers (i.e., the number of sampling times). As described in Wang et al. [2022], another choice for computing the probability of an answer $a$ over all sampled rationales is to use the unnormalized probability $p_{a_i}$ of the language model generating $a_i$ given the prompt of sample $s$:
$p_{a_i} = P(r_i, a_i \mid s) \qquad (6)$
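To make the two aggregation schemes concrete, the short sketch below contrasts the unnormalized unweighted sum (majority vote) with an unnormalized weighted sum that accumulates a per-rationale probability such as $p_{a_i}$ from Eq. (6); the sample data and variable names are illustrative assumptions.

from collections import Counter, defaultdict

# Sketch: aggregating sampled (answer, probability) pairs, where the
# probability plays the role of p_{a_i} in Eq. (6). Values are illustrative.
samples = [("8", 0.31), ("8", 0.22), ("9", 0.40), ("8", 0.05), ("9", 0.12)]

# Unnormalized unweighted sum = majority vote over the sampled answers.
majority = Counter(a for a, _ in samples).most_common(1)[0][0]

# Unnormalized weighted sum: add p_{a_i} over rationales reaching answer a.
weighted = defaultdict(float)
for a, p in samples:
    weighted[a] += p
weighted_best = max(weighted, key=weighted.get)

print(majority)       # "8" (3 of 5 rationales agree)
print(weighted_best)  # "8" (0.58 vs. 0.52)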
|
2306.07932#68
|
Human-in-the-Loop through Chain-of-Thought
|
While the emergence of powerful language models along with Chain-of-thought
prompting has made automation more and more omnipresent, it sometimes
demonstrates its weakness in long-term or multi-step logical reasoning. For
example, users don't always get desirable answers for complex mathematical
problems without human involvement. Against this background, we present the
Manual Correction System (MCS) -- a human-in-the-loop system enhanced by
Chain-of-Thought prompting, which explores how manual correction of sub-logics
in rationales can improve LLM's reasoning performance. Moving one step forward,
considering a system with human-in-the-loop involves more than having humans
improve performance but also controlling the cost. Therefore, we post a
Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on
classical economics theory to analyze, quantify and balance the utility and the
corresponding cost. We conduct experiments of MCS and CAMLOP with twelve
datasets. A significant advantage w.r.t cost and utility proves its superiority
over strong baselines.
|
http://arxiv.org/pdf/2306.07932
|
Zefan Cai, Baobao Chang, Wenjuan Han
|
cs.CL, cs.AI
| null | null |
cs.CL
|
20230610
|
20230623
|
[
{
"id": "1904.09751"
},
{
"id": "2110.08207"
},
{
"id": "2206.04615"
},
{
"id": "2106.15772"
},
{
"id": "2110.14168"
},
{
"id": "1805.06087"
},
{
"id": "1608.01413"
},
{
"id": "1707.02633"
},
{
"id": "2203.02155"
},
{
"id": "2103.07191"
},
{
"id": "1805.04833"
},
{
"id": "2201.11903"
},
{
"id": "1905.13319"
},
{
"id": "2203.11171"
},
{
"id": "2205.01068"
},
{
"id": "2205.11916"
},
{
"id": "1811.00937"
}
] |
2306.06331
| 69 |
[25] JP Carrasco, E García, DA Sánchez, PD Estrella Porter, L De La Puente, J Navarro, and A Cerame. Is "ChatGPT" capable of passing the 2022 MIR exam? Implications of artificial intelligence in medical education in Spain.
[26] Simon Frieder, Luca Pinchetti, Ryan-Rhys Griffiths, Tommaso Salvatori, Thomas Lukasiewicz, Philipp Christian Petersen, Alexis Chevalier, and Julius Berner. Mathematical capabilities of ChatGPT. arXiv preprint arXiv:2301.13867, 2023.
|
2306.06331#69
|
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
|
2306.07932
| 69 |
$p_{a_i} = P(r_i, a_i \mid s)$ (6)
Then we use the unnormalized probabilities $p_{a_i}$ given by the language model's decoder to calculate the probability $p_a$ of the answer $a$ for sample $s$:
$p_a = \dfrac{\sum_{i=1}^{|N|} \mathbb{1}(a_i = a)\, p_{a_i}}{|N|}$ (7)
where $|N|$ denotes the number of rationales decoded for the sample $s$. The result of using the calculation output of Eq. 7 as the probability of answer $a$ is shown in Tab. 4 as Unnormalized Weighted Sum. Apart from computing $p_a$ by taking the unnormalized probability of the language model generating $(r_i, a_i)$ given $s$, we can normalize the output probability for $(r_i, a_i)$ by the output length of $r_i$ [Brown et al., 2020b]:
$p_{a_i} = \exp\left(\frac{1}{K} \sum_{k=1}^{K} \log p_{t_k}\right)$ (8)
where $p_{t_k}$ is the probability of generating the $k$-th token $t_k$ in $(r_i, a_i)$ conditioned on the previous tokens, and $K$ is the total number of tokens in $(r_i, a_i)$:
$p_{t_k} = P(t_k \mid s, t_1, \ldots, t_{k-1})$ (9)
The result of using the calculation output of Eq. 8 as the normalized probability $p_{a_i}$ of the language model generating $a_i$ given the prompt of sample $s$ is shown in Tab. 4 as Normalized Weighted Sum.
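As an illustration of how these weighted scores can be computed, the following Python sketch (an assumption-laden illustration, not code from the paper) takes the parsed answers $a_i$ and the decoder's per-token log-probabilities for each $(r_i, a_i)$, forms either the unnormalized sequence probability (Eq. 6) or the length-normalized probability (Eq. 8), and accumulates each answer's weighted score. Dividing the scores by $|N|$ as in Eq. 7 rescales them uniformly and does not change which answer ranks first, so the sketch omits it.

```python
import math
from collections import defaultdict

def answer_scores(answers, token_logprobs, length_normalize=False):
    """answers: list of a_i; token_logprobs: per-rationale lists of log p_{t_k} (Eq. 9)."""
    scores = defaultdict(float)
    for a_i, logps in zip(answers, token_logprobs):
        if length_normalize:
            # Eq. 8: p_{a_i} = exp((1/K) * sum_k log p_{t_k})
            p_ai = math.exp(sum(logps) / len(logps))
        else:
            # Eq. 6: p_{a_i} = P(r_i, a_i | s) = exp(sum_k log p_{t_k})
            p_ai = math.exp(sum(logps))
        scores[a_i] += p_ai  # weighted sum: sum_i 1(a_i == a) * p_{a_i}
    return dict(scores)

# Hypothetical example with three sampled rationales.
answers = ["42", "17", "42"]
logps = [[-0.1, -0.2], [-1.0, -0.9], [-0.3, -0.4]]
print(answer_scores(answers, logps, length_normalize=True))
```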
|
2306.07932#69
|
Human-in-the-Loop through Chain-of-Thought
|