Table 13: Extended summary of measures/conditions for the “organize office” experiments.
• Number of Retrieved Goals: This chart compares how many goal descriptions are retrieved from the LLM. In the TBP conditions, relatively few goal descriptions are produced (∼90, or about 2.6 descriptions per object). With the ST conditions, many more goals are retrieved (∼245) due to beam search. In the STAR+ conditions, about 365 goals are retrieved. The increase of about 120 goal retrievals reflects the additional LLM retrievals performed by beam search as part of Analysis and Repair.
2306.06770#88
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/disconfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
89
• Total Goals Presented to User: This chart illustrates the number of retrieved goals presented to the user (both charts share the same horizontal axis). In the TBP+O condition, 64 of the 89 retrieved goals are presented to the user (and only 21 are eventually used by the robot). In the STARS+O condition, slightly fewer goals are presented (51 of the 361 goals retrieved) and one goal is used for each object (35 sourced goals). This result highlights that while the retrieval process is much broader for STARS than for TBP, the search and evaluation processes result in greater overall precision in identifying acceptable goal descriptions, requiring fewer user evaluations and producing a higher acceptance rate when a goal needs to be confirmed (oversight).
Figure 11 presents a summary of key results for the “store groceries” task. Details for “store groceries” for measures not discussed in the main body of the paper are as follows.
• Total number of instructions: The total number of instructions decreases in the STARS oversight condition in comparison to TBP. 22 interactions are needed, but 16 of these interactions are proposed goals that require yes/no responses, and 15 of these are accepted (94% acceptance rate, as in the lower right chart). In the STARS+O condition, at least one acceptable goal condition was generated by the LLM for each object in the data set.
2306.06770#89
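For concreteness, here is a minimal Python sketch (not from the paper) that recomputes the oversight figures quoted above from the reported counts; the helper functions `acceptance_rate` and `presentation_precision` are illustrative names, not part of the authors' system.

```python
def acceptance_rate(accepted: int, proposed: int) -> float:
    """Fraction of proposed yes/no goal confirmations the user accepted."""
    return accepted / proposed


def presentation_precision(presented: int, retrieved: int) -> float:
    """How selective the agent is: goals shown to the user vs. goals retrieved."""
    return presented / retrieved


# TBP+O: 64 of 89 retrieved goals reach the user; STARS+O: 51 of 361 (counts above).
print(f"TBP+O   presents {presentation_precision(64, 89):.0%} of retrieved goals")
print(f"STARS+O presents {presentation_precision(51, 361):.0%} of retrieved goals")

# STARS+O oversight: 16 of 22 instructions are yes/no proposals, 15 of them accepted.
print(f"STARS+O acceptance rate: {acceptance_rate(15, 16):.0%}")  # ~94%
```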
• Number of Retrieved Goals: In the TBP conditions, few goal descriptions are produced (39, or 2.6 descriptions per object on average). With the ST conditions, many more goals are retrieved (96). In the STAR+ conditions, 170-177 goals are retrieved. The increase of ∼80 goal retrievals is due to the additional LLM retrievals from beam search used during the repairs of Analysis and Repair.
Figure 10: Expanded panel of summary results from the “tidy kitchen” experiment (panels: Task Completion Rate; Total Number of Instructions; Total Number of Retrieved Goals; Total Number of Instructor Words; Total # of Goals Presented to User; Fraction of Accepted Yes/No Responses).
2306.06770#90
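A small worked example of the per-object arithmetic above, using the retrieval counts reported for this task (15 objects); this is an illustrative sketch, not code from the paper.

```python
# Retrieval counts reported above for "store groceries" (15 objects).
N_OBJECTS = 15
retrieved = {"TBP": 39, "ST": 96, "STAR+": 177}  # STAR+ at the upper end of 170-177

print(f"TBP retrieves ~{retrieved['TBP'] / N_OBJECTS:.1f} goal descriptions per object")
# Extra retrievals attributable to the beam search run during Analysis and Repair:
print(f"STAR+ adds ~{retrieved['STAR+'] - retrieved['ST']} retrievals over ST")
```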
• Total Goals Presented to User: In the TBP+O condition, 21 of the 37 retrieved goals are presented to the user (and only 13 are used by the robot). In the STARS+O condition, slightly fewer goals are presented (16 of the 177 goals retrieved) and one goal is used for each object (15 sourced goals). This result highlights again that the Search Tree and Analysis processes result in greater overall precision in identifying acceptable goal descriptions, requiring fewer user evaluations and generating a higher acceptance rate when goals need to be confirmed (using oversight).
Figure 12 presents a summary of key results for the “organize office” task. Details for “organize office” are as follows.
• Total number of instructions: As with the other tasks, the total number of instructions decreases in the STARS oversight condition compared to TBP. With STARS, 22 interactions are needed, but 15 of these interactions are goal proposals that require yes/no responses and 11 of these are accepted (73% acceptance rate, as in the lower right chart).
2306.06770#91
• Number of Retrieved Goals: In the TBP conditions, as in the other tasks, relatively few goal descriptions are produced (34, or 2.8 descriptions per object). With the ST conditions, many more goals are retrieved (95) due to beam search. In the STAR+ conditions, ∼205 goals are retrieved. Again, the increase of ∼110 goal retrievals is due to the additional LLM retrievals performed by beam search as part of Analysis and Repair.
2306.06770#92
• Total Goals Presented to User: In the TBP+O condition, 28 of the 35 retrieved goals are presented to the user, but only 5 are used by the robot. In the STARS+O condition, fewer goals are presented (15 of the 206 goals retrieved) and almost one goal is used for each object (11 sourced goals). The user had to be queried for a goal for one of the objects. As shown with the other tasks, the retrieval process is much broader for STARS than for TBP, but the ST and AR processes result in greater overall precision in identifying acceptable goal descriptions, requiring fewer user evaluations and creating a higher acceptance rate with oversight.
Figure 13 shows the trade-off between the costs (words and tokens) and performance (task completion) and highlights the relative contributions of the components of the STARS strategy for the three tasks. Figure 13a shows the trade-off for the “tidy kitchen” task. For this task, Search Tree (ST) and Analysis and Repair (AR) have the largest impact on token cost. The benefits in performance are not observed until Analysis and Repair is added, which down-selects from the now larger space of responses. The figure also shows that STARS greatly reduces the human cost in words (while increasing token costs), and that Selection does not have an appreciable impact on performance for this task.
2306.06770#93
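The Figure 13 comparison puts token cost on a log10 scale so it can be read against instructor-word cost and task completion. Below is a minimal sketch of that tabulation; the condition names and all numeric values are placeholders for illustration, since the text does not report exact token or word totals here.

```python
import math


def summarize(condition: str, llm_tokens: int, instructor_words: int, completion: float) -> str:
    """Format one condition as (log10 token cost, human word cost, performance)."""
    return (f"{condition:14s} log10(tokens)={math.log10(llm_tokens):.2f}  "
            f"words={instructor_words:4d}  completion={completion:.0%}")


# Placeholder values only -- not figures from the paper.
print(summarize("baseline", 20_000, 450, 0.50))
print(summarize("full strategy", 400_000, 40, 0.90))
```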
Figure 13b shows the cost/performance trade-off for the “store groceries” task. For this task, Search Tree has a smaller impact on token cost. Adding Analysis and Repair (AR) has a larger impact on token cost but, as before, increases performance significantly. The figure shows again that STARS greatly reduces the human cost in words (while increasing token costs), but in this case Selection does have an appreciable impact on performance.
Figure 11: Performance and user cost measures for experimental conditions for the “store groceries” task.
2306.06770#94
Figure 13c shows the cost/performance trade-off for the “organize office” task. For this task, Search Tree has a comparatively larger impact on token cost, while adding Analysis and Repair (AR) has a much larger impact. As in the other tasks, AR increases performance by a large amount. The figure shows again that STARS greatly reduces the human cost in words, and, as with the “store groceries” task, Selection has a large impact on performance, showing an increase from 64% (STAR) to 93% (STARS). Figure 14 shows, for each condition of the “tidy kitchen” task, the number of objects (out of 35) for which the robot retrieved at least one situationally relevant response from the LLM. While the baseline retrieves situationally relevant responses for only 15 objects, STARS results in 100% of the objects having situationally relevant responses, largely due to Search Tree and Analysis and Repair. This chart illustrates that the STARS strategy is successful at eliciting situationally relevant responses from the LLM, even if those responses are not always selected first by the robot.
2306.06770#95
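A small sketch of the Figure 14 measure (objects with at least one situationally relevant response), assuming a hypothetical per-object layout of categorized responses; the object handles and labels below are made up for illustration, not taken from the experiment.

```python
# Hypothetical layout: object handle -> category labels of its retrieved responses.
responses_by_object = {
    "mug": ["not viable", "situationally relevant", "reasonable"],
    "plate": ["viable, not reasonable", "reasonable"],
}

covered = sum(
    1 for labels in responses_by_object.values() if "situationally relevant" in labels
)
print(f"{covered} of {len(responses_by_object)} objects have >= 1 relevant response")
```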
Figure 15 shows the token cost (from prompts and generation) for each experimental condition for the “tidy kitchen” task, showing the tokens used for each prompt type and the tokens used per object. Some objects, particularly in the conditions with Analysis and Repair, result in many more tokens being used. The types of prompts (in order from left to right) include the initial prompt, recursive (prompts used for the Search Tree beam search), repair (prompts used during Analysis and Repair), repair/recurse (prompts used for beam search during repair), and selection (the prompt used for LLM Selection over candidates). Depending on the condition, only certain types of prompts are used. Figures 16 and 17 show the token cost (from prompts and generation) for each experimental condition for the “store groceries” and “organize office” tasks. The results for these tasks are consistent with the “tidy kitchen” task.
2306.06770#96
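A sketch of the kind of bookkeeping behind Figures 15-17, assuming a hypothetical per-call log of (prompt type, tokens sent, tokens received); the prompt-type names follow the list above, but the log format and values are illustrative, not the authors'.

```python
from collections import defaultdict

# Hypothetical per-call log entries: (prompt type, prompt tokens, completion tokens).
llm_calls = [
    ("initial", 310, 42),
    ("recursive", 350, 55),
    ("repair", 420, 60),
    ("repair/recurse", 430, 58),
    ("selection", 510, 8),
]

totals = defaultdict(lambda: [0, 0])
for prompt_type, sent, received in llm_calls:
    totals[prompt_type][0] += sent      # tokens sent (hatched areas in the figures)
    totals[prompt_type][1] += received  # tokens received (solid areas)

for prompt_type, (sent, received) in totals.items():
    print(f"{prompt_type:15s} sent={sent:5d} received={received:4d}")
```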
Figure 18 shows the categorization of LLM responses according to viability, reasonableness, and situational relevance for every experimental condition for the “tidy kitchen” task. As outlined in the paper, the distributions of responses in the ST-AR-S conditions are quite similar, in contrast to the baseline conditions (TBP and TBP+O), which reveal a different pattern. The baseline conditions show more situationally relevant responses by percentage, but many fewer responses are retrieved in these conditions. STARS results in an increase in the total number of situationally relevant responses retrieved, at the cost of generating more unviable responses (by percentage) overall. Figure 19 shows the categorization of LLM responses according to viability, reasonableness, and situational relevance for every experimental condition for the “store groceries” task. The distributions of responses are similar to those from the “tidy kitchen” task, but with an increase across conditions in the percentage of situationally relevant responses and a decrease across conditions in the percentage of not viable responses. This is likely due to the task being simpler than “tidy kitchen.” Figure 20 shows the categorization of LLM responses according to viability, reasonableness, and situational relevance for every experimental condition for the “organize office” task. The distributions of responses, compared to the prior two tasks,
2306.06770#97
Figure 12: Performance and user cost measures for experimental conditions for the “organize office” task.
show a decrease across conditions in the percentage of situationally relevant responses and an increase across conditions in the percentage of not viable responses. From inspection of responses, this was due to many responses not being aligned with the specific office that the agent was situated in (e.g., referring to desk drawers instead of drawers).
2306.06770#98
Figure 13: Number of log10 tokens vs. words vs. task completion rate for all experimental conditions for the three tasks ((a) Tidy kitchen, (b) Store groceries, (c) Organize office).
Figure 14: Evaluating performance of STARS in terms of individual objects: number of objects with >= 1 situationally relevant response, per condition.
2306.06770#99
Figure 15: Detailed summary of token usage by prompt type (left) and for individual objects (right) for the “tidy kitchen” task. The hatched areas summarize the prompts sent to the LLM and the solid areas the number of tokens received in response to those prompts.
Figure 16: Detailed summary of token usage by prompt type (left) and for individual objects (right) for the “store groceries” task. The hatched areas summarize the prompts sent to the LLM and the solid areas the number of tokens received in response to those prompts.
2306.06770#100
Figure 17: Detailed summary of token usage by prompt type (left) and for individual objects (right) for the “organize office” task. The hatched areas summarize the prompts sent to the LLM and the solid areas the number of tokens received in response to those prompts.
2306.06770#101
Chart area for Figure 18: distribution of responses by error/correctness categories for each condition (TBP, TBP+O, ST, STS, STAR, STARS, STARS+O). Legend: Situationally Relevant; Reasonable; Viable, Not Reasonable; Not Viable; with sub-categories uninterpretable, embodiment limitation, unknown-word, post-completion error, ungrounded-object, affordance-mismatch, and reasonable alternative location.
2306.06770#102
Figure 18: Categorization of all LLM responses for the experimental conditions for “tidy kitchen” task. These charts illustrate the distribution of various categories of responses over all the LLM responses produced. Primary categories are: not viable, viable but not reasonable, reasonable but not situationally relevant, and situationally relevant. Further sub-categorization of responses is shown for the not viable and reasonable categories.
2306.06770#103
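A minimal sketch of how such a categorization tally can be computed, assuming each retrieved response carries one primary label; the labels below follow the primary categories named in the caption, while the specific counts are made up for illustration.

```python
from collections import Counter

# Hypothetical primary labels for one condition's retrieved responses.
labels = [
    "situationally relevant", "reasonable", "not viable",
    "viable, not reasonable", "situationally relevant", "not viable",
]

counts = Counter(labels)
total = sum(counts.values())
for category, n in counts.most_common():
    print(f"{category:25s} {n:3d}  ({n / total:.0%})")
```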
Chart area for Figure 19: distribution of responses by error/correctness categories for each condition (TBP, TBP+O, ST, STS, STAR, STARS, STARS+O); same category legend as Figure 18.
2306.06770#104
Figure 19: Categorization of all LLM responses for the experimental conditions for “store groceries” task. These charts illustrate the distribution of various categories of responses over all the LLM responses produced. Primary categories are: not viable, viable but not reasonable, reasonable but not situationally relevant, and situationally relevant. Further sub-categorization of responses is shown for the not viable and reasonable categories.
2306.06770#105
Chart area for Figure 20: distribution of responses by error/correctness categories for each condition (TBP, TBP+O, ST, STS, STAR, STARS, STARS+O); same category legend as Figure 18.
2306.06770#106
Figure 20: Categorization of all LLM responses for the experimental conditions for “organize office” task. These charts illustrate the distribution of various categories of responses over all the LLM responses produced. Primary categories are: not viable, viable but not reasonable, reasonable but not situationally relevant, and situationally relevant. Further sub-categorization of responses is shown for the not viable and reasonable categories.
D Exploration of Variability
As mentioned in the body of the paper, there is little variation from one run to another of the same condition (although there is slightly more variation in the tidy kitchen task in comparison to the other two tasks). This section of the appendix further explores what variability there is. Because running the experiment is time-consuming (especially in the oversight conditions) and incurs non-trivial financial costs for LLM use, and because the results vary little between runs, we ran all conditions for the primary experiment only once.
2306.06770#107
Table 14 shows the detailed summary of measures for 10 runs of the STARS condition (no oversight) for all three of the tasks. Two additional lines summarize the mean and standard deviation of those measures that vary across runs in the STARS condition. The table follows the format of Table 11, and the definitions of the individual measures are summarized in that table. Because STARS is not an oversight condition, the total number of instructions and total words do not change from run to run. Similarly, no goals are proposed to the user and thus there are no yes/no responses to those proposed goals. The results for tidy kitchen are also illustrated graphically in Figure 21. As these results show, there is little change in overall results from run to run. In tidy kitchen, the Task Completion Rate varies from 75% to 80%, or from 30 to 32 of the 40 state assertions defined for the final desired state. There are even smaller variations (in a relative sense) in the retrieval and token measures. In all 10 runs, STARS produces a viable goal that is sourced by the robot to execute (a short numerical check of these summary statistics appears below).

[Figure 21 panels: Task Completion Rate; Total Number of Instructions; Total Number of Retrieved Goals; Total Number of Instructor Words.]
2306.06770#108
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
109
Figure 21: Comparing the variation of outcomes over 10 STARS runs for the tidy kitchen task. While the lack of variability may appear unexpected, it is actually a consequence of the LLM’s embedded token probabilities (which are fixed once the LLM is trained) and the experimental design, in which an object’s gross location (“plate on the table” rather than a specific location on the table) is used for prompt generation. For any given object that the robot perceives, it will generate an instantiated prompt from the goal-description template using the gross location (“location: table”).7 While the task completion results for the other two tasks are identical for all but one run, there is somewhat more (gross) variation in task completion for tidy kitchen. This results from the lack of context that was outlined in the body of the paper. For example, for the “mug on the counter,” the agent cannot directly perceive whether the mug is dirty or clean. Verified goals from the agent indicating that the mug should go into the sink or cupboard are selected (i.e., by the Selection process) somewhat
2306.06770#109
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
110
7 In other work, we have explored the effects of the number of examples for few-shot, in-context learning with template-based prompting, as well as analysis of how well particular prompt examples contribute to the four main requirements. However, for this experiment, we used a single, fixed example in all prompt templates, which means that for a given object in a gross location, the prompt will be exactly the same for that object.

[Table 14 column headers: Completion Rate (%); Retrieved goals; Proposed goals; # Total Yes/No Instructions; Total Instructions; Total user words.]
2306.06770#110
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
111
[Table 14 column headers, continued: Condition; Sourced goals; Total completion tokens; Total tokens; Total prompt tokens.]

tidy kitchen (run1–run10; Mean; Std. Dev.):
Task Completion Rate (%): 77.5, 75.0, 77.5, 75.0, 75.0, 80.0, 80.0, 77.5, 80.0, 77.5; Mean 77.5; Std. Dev. 2.04
Retrieved goals: 360, 347, 357, 355, 354, 364, 359, 357, 353, 355; Mean 356; Std. Dev. 4.5
Proposed goals: – for all runs
Sourced goals: 35 for all runs
Total prompt tokens: 130,950, 125,666, 128,841, 130,476, 128,255, 133,645, 130,657, 130,082, 130,521, 129,067; Mean 129,816; Std. Dev. 2,077
Total completion tokens: 3,682, 3,552, 3,605, 3,674, 3,633, 3,728, 3,666, 3,647, 3,658, 3,594; Mean 3,643; Std. Dev. 50
Total tokens: 134,632, 129,218, 132,446, 134,150, 131,888, 137,373, 134,323, 133,729, 134,179, 132,661; Mean 133,459; Std. Dev. 2,124
Total Instructions: 14 for all runs
Total Yes/No Instructions: – for all runs
Total user words: 76 for all runs
store groceries run1 run2 run3 run4 run5
2306.06770#111
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
112
(tidy kitchen, continued) Total Yes/No Instructions: – for all runs; Total user words: 76 for all runs
store groceries (run1–run10; Mean; Std. Dev.):
Task Completion Rate (%): 94.4, 94.4, 94.4, 94.4, 94.4, 94.4, 94.4, 94.4, 88.9, 94.4; Mean 93.89; Std. Dev. 1.76
Retrieved goals: 171, 173, 175, 176, 178, 176, 177, 179, 178, 177; Mean 176; Std. Dev. 2.4
Proposed goals: – for all runs
Sourced goals: 15 for all runs
Total prompt tokens: 60,069, 60,443, 60,784, 60,558, 60,990, 61,041, 61,321, 61,620, 62,502, 62,222; Mean 61,115; Std. Dev. 776
Total completion tokens: 1,739, 1,683, 1,675, 1,720, 1,710, 1,697, 1,706, 1,707, 1,730, 1,737; Mean 1,710; Std. Dev. 21.6
Total tokens: 61,808, 62,126, 62,459, 62,278, 62,700, 62,738, 63,027, 63,327, 64,232, 63,959; Mean 62,865; Std. Dev. 783
Total Instructions: 6 for all runs
Total Yes/No Instructions: – for all runs
Total user words: 28 for all runs
organize office run1 run2 run3 run4 run5 run6 run7 run8 run9
2306.06770#112
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
113
(store groceries, continued) Total Yes/No Instructions: – for all runs; Total user words: 28 for all runs
organize office (run1–run10; Mean; Std. Dev.):
Task Completion Rate (%): 92.9, 92.9, 92.9, 92.9, 92.9, 92.9, 92.9, 92.9, 85.7, 92.9; Mean 92.14; Std. Dev. 2.26
Retrieved goals: 201, 200, 205, 200, 200, 205, 197, 207, 207, 204; Mean 202; Std. Dev. 3.4
Proposed goals: – for all runs
Sourced goals: 12 for all runs
Total prompt tokens: 73,933, 73,355, 74,958, 73,020, 73,944, 75,134, 72,746, 75,852, 75,216, 75,212; Mean 74,337; Std. Dev. 1,075
Total completion tokens: 2,123, 2,128, 2,164, 2,126, 2,154, 2,159, 2,111, 2,182, 2,167, 2,118; Mean 2,143; Std. Dev. 24.7
Total tokens: 76,056, 75,483, 77,122, 75,146, 76,098, 77,293, 74,857, 78,034, 77,383, 77,330; Mean 76,480; Std. Dev. 1,093
Total Instructions: 6 for all runs
Total Yes/No Instructions: – for all runs
Total user words: 28 for all runs
2306.06770#113
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06770
114
Table 14: Measures for the STARS condition over ten runs for the three experimental tasks.

arbitrarily (i.e., the system lacks the context that “dishes out of their storage location should be assumed to be dirty”). Because the desired state for this object is always the sink or dishwasher, the agent sometimes places it in the desired location and sometimes not. Collectively, this lack of context accounts for the majority of the differences observed in the tidy kitchen task completion rate.
2306.06770#114
Improving Knowledge Extraction from LLMs for Task Learning through Agent Analysis
Large language models (LLMs) offer significant promise as a knowledge source for task learning. Prompt engineering has been shown to be effective for eliciting knowledge from an LLM, but alone it is insufficient for acquiring relevant, situationally grounded knowledge for an embodied agent learning novel tasks. We describe a cognitive-agent approach that extends and complements prompt engineering, mitigating its limitations and thus enabling an agent to acquire new task knowledge matched to its native language capabilities, embodiment, environment, and user preferences. The approach is to increase the response space of LLMs and deploy general strategies, embedded within the autonomous agent, to evaluate, repair, and select among candidate responses produced by the LLM. We describe the approach and experiments that show how an agent, by retrieving and evaluating a breadth of responses from the LLM, can achieve 77-94% task completion in one-shot learning without user oversight. The approach achieves 100% task completion when human oversight (such as an indication of preference) is provided. Further, the type of oversight largely shifts from explicit, natural language instruction to simple confirmation/discomfirmation of high-quality responses that have been vetted by the agent before presentation to a user.
http://arxiv.org/pdf/2306.06770
James R. Kirk, Robert E. Wray, Peter Lindes
cs.AI, cs.HC, cs.RO, I.2.6; I.2.7
7 pages, 8 figures, 3 tables, bibliography, appendix (34 pages total). Text revised and results extended with additional tasks
null
cs.AI
20230611
20230822
[ { "id": "2302.06706" }, { "id": "2112.04359" }, { "id": "2110.14168" }, { "id": "2209.11302" }, { "id": "2302.12246" }, { "id": "2208.09554" }, { "id": "2305.05658" }, { "id": "2303.17491" }, { "id": "2303.08774" } ]
2306.06331
0
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination

Xuan-Quy Dao, School of Engineering, Eastern International University, Binh Duong, Vietnam ([email protected])
Ngoc-Bich Le, School of Biomedical Engineering, International University, VNUHCM, Ho Chi Minh City, Vietnam ([email protected])

# ABSTRACT
2306.06331#0
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.07932
0
# Human-in-the-Loop through Chain-of-Thought

Zefan Cai1,2, Baobao Chang1*, Wenjuan Han3*
1National Key Laboratory for Multimedia Information Processing, Peking University
2School of Software and Microelectronics, Peking University, China
3Beijing Jiaotong University, Beijing, China
[email protected]; [email protected]; [email protected]

# Abstract
2306.07932#0
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
1
This study offers a complete analysis of ChatGPT’s mathematics abilities in responding to multiple- choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT’s performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of 83%; but, as the difficulty level rose, it scored poorly, with an accuracy rate of 10%. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions.
2306.06331#1
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
1
Yongchao Chen1,2, Jacob Arkin1, Charles Dawson1, Yang Zhang3, Nicholas Roy1, and Chuchu Fan1 Abstract— For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website§ for prompts, videos, and code.
2306.06531#1
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
1
# Abstract While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don’t always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) — a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM’s reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines. # Introduction
2306.07932#1
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
2
and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of 70%, followed by VNHSGE mathematics (58.8%). However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
2306.06331#2
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
2
# I. INTRODUCTION Providing agents with the ability to find and execute optimal plans for complex tasks is a long-standing goal in robotics. Robots need to not only reason about the task in the environment and find a satisfying sequence of actions but also verify the feasibility of executing those actions given the robot’s motion capabilities. This problem is referred to as task and motion planning (TAMP), and there has been considerable research on efficient algorithms [1]. Classic solutions rely on specifying tasks in a dedicated planning representation, such as PDDL [2] or Temporal logics [3], that is both sufficiently expressive to specify task complexities (e.g. constraints on task execution) and amenable to such algorithms [2], [3], [4], [5]. While this approach to task specification has been quite successful, directly using these representations requires training and experience, making them poor interfaces for non-expert users. As an alternative, natural language (NL) provides an intuitive and flexible way to describe tasks. Pre-trained large language models (LLMs) have demonstrated surprisingly good performance on many language-related tasks [6], and there has been an associated burst of research
1Massachusetts Institute of Technology. [email protected], [email protected], [email protected], [email protected]
2306.06531#2
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
2
# Introduction Large language model-based Artificial Intelligence systems are augmenting humans in certain roles, and soon this trend will expand to the vast majority of the workforce. However, while the emergence of powerful language models [Sanh et al., 2021, Ouyang et al., 2022, Zhang et al., 2022, Shao et al., 2023] has made automation omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning [Hosseini et al., 2014, Kushman et al., 2014, Koncel-Kedziorski et al., 2015, Roy and Roth, 2016]. For example, users don’t always get desirable answers for a mathematical problem without human involvement. Making tangible progress in mitigating these errors is where we need humans, and a system with human-in-the-loop involves more than having humans improve performance; it also requires controlling the cost. Against this background, a timely question arises: how can we build a human-in-the-loop system in the most effective (namely, high-utility) and low-cost way?
2306.07932#2
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
3
Keywords ChatGPT · large language model · natural language processing · Vietnamese high school graduation examination # Introduction In recent years, artificial intelligence (AI) has drawn a lot of interest and been extensively discussed. AI represents a creative and imaginative advancement in many fields, including mathematics instruction. The current work analyzes a number of studies that looked into the application of AI in a number of contexts, including medical [1], education [2], [3], [4], [5] and pandemics [6]. The role of educators should not be replaced by AI in the educational process; rather, AI should be used to enhance it [8]. The implementation of AI in education faces a variety of challenges despite the potential benefits. In order to improve student learning outcomes and get around obstacles like a shortage of qualified teachers and resources [9], [10], using AI in education is becoming more popular [11], [12], [13], [14], [15]. According to research, AI is crucial for guaranteeing sustainable societal growth and can boost student accomplishment. Despite the fact that literature evaluations have been undertaken on the use of AI in education across a variety of subjects, little is known about how AI especially affects mathematics education, including its nature, target grade levels, and study methodologies. Achievement in mathematics is important for kids’ academic progress, future employment prospects,
2306.06331#3
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.07932
3
See Fig. 1 as an example. For humans, solving the whole problem in the leftmost box is often more difficult than solving one of the sub-logics (e.g., 2 * (16 − 3) = 25). Correction of the erroneous sub-logic (e.g., 2 * (16 − 3) = 25 → 2 * (16 − 3) = 26) helps the LLM reach a correct final answer. In the last few years, thanks to explorations in Large Language Models (LLMs) and advances in in-context learning (ICL) technologies, giant breakthroughs have been obtained. Just by being fed an instruction, models can function very well on that task without manual finetuning [Brown et al., 2020a]. This provides a chance for a human to change the predicted results via natural language instructions as a flexible and friendly interface. Furthermore, changing the rationale for chain-of-thought (CoT) prompting [Wei et al., 2022] is even more user-friendly since short and simple
* Corresponding authors. Preprint. Under review.
2306.07932#3
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
4
and social growth, and it is connected to civil rights issues [16], [17]. Therefore, preparing students with math skills and knowledge is crucial for adapting to a society that is changing quickly and ensuring sustainable development. A comprehensive literature review was undertaken by bin Mohamed et al. [18] to provide an overview of AI in mathematics education for students at all levels of education, one of the few studies on the effects of AI on mathematics education. This review contributes to the discussion about enhancing teaching and learning in mathematics education through the use of AI. In a different study, Hwang [19] used 21 empirical studies with 30 independent samples to conduct a meta-analysis to assess the overall impact of AI on elementary children’s mathematical achievement. The results of the study revealed that AI had a negligible impact on primary kids’ mathematical proficiency. The results showed that grade level and topic of mathematics learning variables considerably reduced the impact of AI on mathematical achievement. Other moderator variables’ effects, however, were found to be insignificant. Based on the findings, this study offers both practical and theoretical insights that can help guide the appropriate application of AI in the teaching of mathematics to elementary school children. It is evident that additional meta-analysis is required to determine whether AI offers novel opportunities for mathematics learning [13], [15]. Studies examining how moderating variables affect the connection between them are also necessary.
2306.06331#4
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
4
applying them to task execution [7], task planning [8], [9], [10], [11] and TAMP [12], [13]. Promising early efforts used LLMs as direct task planners [8] generating a sequence of sub-tasks based on a set of natural language instructions, but these approaches were limited by a lack of feedback and inability to verify whether sub-task sequences are executable. Further research addressed executability by connecting sub-tasks to control policy affordance functions [9], providing environmental feedback of robot actions [11], and interleaving action feasibility checking with LLM action proposals [12]; this last work also addressed long-horizon action dependencies. However, these approaches struggle with complex tasks involving temporally-dependent multi-step actions, action sequence optimization [9], [11], and constraints on task execution [12]. Furthermore, these frameworks factor the planning problem and use LLMs to infer a task plan separately from the motion plan. In many situations, the task and motion plan must be optimized together to fulfill the task. For instance, when the task is ‘reach all locations via the shortest path’, the order of places to be visited (task planning) depends on the geometry of the environment and the related motion optimization. Unfortunately, we find that LLMs do not seem capable of directly generating trajectories, possibly due to limitations in complex spatial and numerical reasoning [14], [15].
2306.06531#4
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
4
[Figure 1: the MCS pipeline, with an Input stage, a Sampling Stage, a Filtering Stage, a Correction Stage, and an Answer Stage. The running example is the question “Janet’s ducks lay 16 eggs per day. She eats three for breakfast. She sells the remainder for $2 each. How much does she make every day?”; an erroneous sampled sub-logic (2*(16-3) = $25) is manually corrected to $26, yielding the final answer $26. A second example asks how many peanuts Amy ends with after starting with 7 and receiving 55 more from Gerald (7 + 55 = 62).]
2306.07932#4
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
5
The area of education could undergo a revolution owing to recent advancements in natural language processing (NLP), which have led to the development of increasingly complex language models like GPT-3. Due to its capacity to produce natural language answers to a variety of questions, ChatGPT, a large language model based on the GPT architecture, has attracted a great deal of interest in the educational community. In recent years, there has been increasing interest in using chatbots, particularly ChatGPT, in education. Several studies have investigated the possible advantages, issues, and difficulties of this practice. Halaweh [20] addressed educators’ worries about the adoption of ChatGPT into educational contexts, arguing for its inclusion and offering guidelines for safe implementation. In a study on the potential effects of ChatGPT on education, Zhai [21] recommended changing instructional objectives to emphasize students’ creativity and critical thinking. In their discussion of the possible advantages and difficulties of employing large language models in educational contexts, Kasneci et al. [22] emphasized the need for competencies and literacies to understand the technology and its constraints.
2306.06331#5
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
5
To benefit from both the user-friendliness of NL and the capabilities of existing TAMP algorithms, we approach the problem by using LLMs to translate from high-level task descriptions to formal task specifications. We are not the first to use LLMs in this way [16], [17], but our work addresses some limitations of prior approaches. Previous work translated natural language to Linear Temporal Logics (LTL) [18], which only considered the problem of task planning, and PDDL problem descriptions [16] or PDDL goals [17]. Here we utilize Signal Temporal Logic (STL) as the intermediary representation, allowing for more expressive constraints than LTL and facilitating integrated task and motion planning as with PDDL [19]. The LLM translation process can produce malformed (syntax errors) and semantically misaligned (semantic errors) formal task specifications. To address syntax errors, we adopt an existing iterative re-prompting technique that relies on an external syntax verifier to prompt the LLM with the specific syntactic error for correction [20]. Unfortunately, the lack of an external verifier makes this technique inapplicable for a semantic misalignment between the original natural language instruction and the translated specification. To address this problem, we contribute a novel autoregressive re-prompting
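As a concrete illustration of the syntactic re-prompting loop described in the chunk above, here is a minimal sketch; the `llm_complete` and `parse_stl` callables are assumed interfaces standing in for the actual LLM client and STL verifier, not the authors' released code.

```python
# Sketch of iterative syntactic re-prompting (illustrative only).

def translate_with_syntax_check(nl_instruction, llm_complete, parse_stl, max_retries=3):
    """Translate a natural-language instruction to STL, re-prompting on syntax errors."""
    prompt = f"Translate the following instruction into STL:\n{nl_instruction}"
    stl = llm_complete(prompt)
    for _ in range(max_retries):
        ok, error_msg = parse_stl(stl)  # external syntax verifier
        if ok:
            return stl
        # Feed the specific syntax error back to the model for correction.
        prompt = (f"The STL formula below has a syntax error: {error_msg}\n"
                  f"Instruction: {nl_instruction}\n"
                  f"Faulty STL: {stl}\n"
                  "Please output a corrected STL formula.")
        stl = llm_complete(prompt)
    return stl  # best effort after exhausting the retry budget
```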
2306.06531#5
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.06331
6
The effectiveness of ChatGPT in assessments has also been examined in studies. Kortemeyer (2023) found that ChatGPT displayed several misconceptions and mistakes typical of a beginner learner yet would only narrowly pass a calculus-based physics course. Katz et al. [23] conducted an experimental evaluation of GPT-4’s zero-shot performance on the complete Uniform Bar Examination (UBE), demonstrating that it performed better than human test-takers and previous models on the Multistate Bar Examination (MBE), which is a multiple-choice test. Gilson et al. [24] assessed ChatGPT’s performance on multiple-choice questions related to the USMLE Step 1 and Step 2 tests and found that its performance is comparable to that of a third-year medical student. These studies show the potential of chatbots to enhance education and legal services, but they also raise questions about their accuracy and dependability in assessments.
2306.06331#6
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
6
[Figure residue: three panels labeled LLM-As-Translator & Checker, LLM-As-Task Planner, and LLM-As-Motion Planner, each consuming a language instruction (e.g., reaching all goals and getting keys before entering doors) and a state observation with scene objects.] Fig. 1. Illustration of different approaches applying LLMs for task and motion planning; our work contributes the LLM-As-Translator & Checker approach. Each approach accepts a natural language instruction and environment state as input and outputs a robot trajectory.
2306.06531#6
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
6
Figure 1: MCS comprises four stages: (1) sampling stage, prompting the LLM using CoT prompting and replacing greedy decoding by sampling from the LLM’s decoder to generate a set of rationales (i.e., the complete logical chain of CoT output); (2) filtering stage, filtering out the samples ranked high by Diversity Entropy; (3) correction stage, manually adding, deleting, and modifying erroneous sub-logics in the most likely rationale of the filtered sample; and (4) answer stage, prompting the LLM using CoT prompting again with the manually corrected sub-logics and using greedy decoding to obtain the final answer. Sub-logics in the rationale are easy for humans to handle. While manual correction helps, the labor of this additional correction stage brings direct and indirect costs (see Sec. 3 for more details). When and how humans intervene will greatly affect the cost and utility. Until recently, few researchers had explored this balance in ICL.
2306.07932#6
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
7
Through the simulation of various use cases, Frieder et al. [26] conducted a study to evaluate the mathematical proficiency of ChatGPT and determine its potential as a helpful assistant to professional mathematicians. The outcomes revealed that ChatGPT’s mathematical skills were significantly inferior to those of the typical mathematics graduate student. However, it is also critical to assess ChatGPT’s mathematical prowess at lower levels, such as high school. This evaluation would shed light on ChatGPT’s capacity to support teachers and students at this level of mathematics learning.
2306.06331#7
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
7
technique that uses an LLM to evaluate whether the generated plan is semantically consistent with the original instruction. We re-prompt the model to check the alignment between the original instruction and the generated plan by providing the context of the instruction, the generated STL, and the output of the planner. We conduct comprehensive experiments in challenging 2D task domains, including several multi-agent tasks, and find that our approach outperforms direct LLM planning for tasks with hard geometric and temporal constraints. We show that, when combined with automatic syntactic correction, our technique significantly improves task success rates. We conduct an ablation study over the translation step by integrating a fine-tuned NL-to-STL model [21] with the AutoTAMP framework and show that GPT-4 few-shot learning is competitive with fine-tuning. In addition to our code, we publish a dataset of 1400 test cases consisting of the language instructions, environments, generated STL, and planner trajectory outputs. We conclude that in-context learning with pre-trained LLMs is well suited for language-to-task-specification translation for solving TAMP problems. # II. PROBLEM DESCRIPTION
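The semantic check described above can be sketched as a single re-prompt that shows the model the instruction, the generated STL, and the planner's output; the prompt wording and the `llm_complete` callable below are illustrative assumptions, not the released AutoTAMP prompts.

```python
# Sketch of the autoregressive semantic alignment check (illustrative only).

def semantic_check(nl_instruction, stl_spec, planned_trajectory, llm_complete):
    """Ask the LLM whether the planned trajectory satisfies the original instruction."""
    prompt = (
        "An instruction was translated into STL, and a planner produced a trajectory.\n"
        f"Instruction: {nl_instruction}\n"
        f"STL: {stl_spec}\n"
        f"Planner output (timed waypoints): {planned_trajectory}\n"
        "Does the plan satisfy the instruction? Reply 'YES' or describe the mismatch."
    )
    verdict = llm_complete(prompt)
    is_aligned = verdict.strip().upper().startswith("YES")
    return is_aligned, verdict  # a mismatch description can seed another re-prompt
```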
2306.06531#7
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
7
We present the Manual Correction System (MCS; Sec. 2), a human-in-the-loop system that explores when and how manual correction of rationales can efficiently improve an LLM’s reasoning ability. To our knowledge, MCS is the first human-in-the-loop system leveraging rationales. As shown in Fig. 1, MCS consists of four stages: prompting the LLM with CoT, automatically filtering out the incorrectly predicted samples, having humans correct their rationales, and prompting the LLM using CoT again to obtain the final answer. Referring to the “when” problem, we consider a diversity-based method to obtain a cue indicating when humans should be involved, so as to reduce human labor as much as possible (Sec. 2.1). The diversity-based method is inspired by the diversity of the rationales. We have found that even when the desired answer is fixed, introducing the diversity degree of the rationales can be highly beneficial; therefore we introduce Diversity Metrics, as commonly used in the Active Learning field [Brinker, 2003, Yang et al., 2015, Agarwal et al., 2020], to find data points requiring manual intervention. Then it comes to the “how” problem
2306.07932#7
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
8
NLP has received a lot of attention recently as a vital study area. Chatbots, one of its implementations, have drawn attention for their capacity to mimic human interactions. While current research highlights the potential of chatbots to support students’ learning in a variety of educational settings, their effectiveness on particular subjects, like mathematics, in high-stakes exams has received little attention. By evaluating ChatGPT’s ability to complete mathematical challenges and pass the VNHSGE exam, this study aims to fill this knowledge gap in the literature. This will be achieved by contrasting ChatGPT’s performance on our test with that of earlier assessments made by the OpenAI team [27]. This study intends to advance knowledge of the benefits of utilizing cutting-edge technology in education to enhance student results by studying the efficiency of AI-powered chatbots in assisting students in high-stakes tests. The results of this study may be especially helpful to educators and policymakers who want to use AI to enhance learning outcomes.
2306.06331#8
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
8
# II. PROBLEM DESCRIPTION As shown in Figure 1, we aim to convert a natural language instruction, including spatial and temporal constraints, into a motion plan for a robot encoded as a set of timed waypoints, e.g., (xi, yi, ti). The environment state is encoded as a set of named obstacles described as polygons and is provided as additional context. Our task is to generate a constraint-satisfying trajectory based on the given instruction and the environment state. The robot must not surpass its maximum velocity, and the total operation time should not exceed the task time limit. We assume that the full trajectory is a linear interpolation between the timed waypoints; complex trajectories can be specified by dense waypoint sequences. # III. METHODS Figure 1 illustrates three of the approaches we compare in our work, each using LLMs in some capacity. Each takes as input (1) a text-based representation of the global environment state, (2) in-context examples for few-shot learning, and (3) a natural language instruction. The LLM-As-Translator & Checker approach is the contribution of this paper. Details and examples of context for prompting and re-prompting can be found in our code repository§. A. LLM End-to-end Motion Planning
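To make the problem setup concrete, here is a small sketch of the timed-waypoint trajectory representation with linear interpolation and the velocity/time-limit checks it implies; the class and method names are illustrative assumptions, not code from the paper.

```python
# Timed-waypoint trajectory sketch (illustrative only).
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Trajectory:
    waypoints: List[Tuple[float, float, float]]  # (x, y, t) with t strictly increasing

    def position_at(self, t: float) -> Tuple[float, float]:
        """Linearly interpolate the (x, y) position at time t."""
        for (x0, y0, t0), (x1, y1, t1) in zip(self.waypoints, self.waypoints[1:]):
            if t0 <= t <= t1:
                a = (t - t0) / (t1 - t0)
                return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
        raise ValueError("t is outside the trajectory's time range")

    def respects_limits(self, v_max: float, t_limit: float) -> bool:
        """Check the maximum-velocity and total-operation-time constraints."""
        if self.waypoints[-1][2] > t_limit:
            return False
        for (x0, y0, t0), (x1, y1, t1) in zip(self.waypoints, self.waypoints[1:]):
            dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
            if dist / (t1 - t0) > v_max:
                return False
        return True
```

As the problem statement notes, denser waypoint sequences approximate more complex motions under the same piecewise-linear assumption.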
2306.06531#8
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
8
2003, Yang et al., 2015, Agarwal et al., 2020], to find data points requiring manual intervention. Then it comes to the “how” problem (Sec. 2.2). We empirically prove the viability of paying attention to sub-logics instead of the whole problem. We define three operations (i.e., modifying, adding, and deleting) that a human can perform on the sub-logics of rationales for efficiency and simplification.
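The three correction operations can be pictured as simple edits on an ordered list of sub-logic strings; this list-of-strings representation is an assumption made here for illustration, not the paper's data format.

```python
# Illustrative sub-logic edit operations (modify / add / delete).

def modify_sublogic(sublogics, index, new_text):
    return sublogics[:index] + [new_text] + sublogics[index + 1:]

def add_sublogic(sublogics, index, new_text):
    return sublogics[:index] + [new_text] + sublogics[index:]

def delete_sublogic(sublogics, index):
    return sublogics[:index] + sublogics[index + 1:]

# Example: a human fixes one erroneous step in a sampled rationale.
rationale = ["She has 16 - 3 = 13 eggs left.", "So she makes 16 * 2 = $32 every day."]
rationale = modify_sublogic(rationale, 1, "So she makes 13 * 2 = $26 every day.")
```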
2306.07932#8
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
9
In this article, we concentrate on examining ChatGPT’s capability for resolving mathematical problems within the framework of the VNHSGE exam. The Vietnamese educational system places a high value on mathematics, which is frequently seen as a key predictor of student achievement. The promise of AI-powered tools for enhancing mathematics education can therefore be shown by analyzing ChatGPT’s mathematical capabilities in the context of the VNHSGE mathematics dataset [28]. Our work seeks to critically evaluate ChatGPT’s performance on mathematical questions in the VNHSGE exam and explore the prospects of deploying AI-powered tools to help enhance mathematics teaching. # 2 Objectives and Methodology # 2.1 Objectives This study aims to offer a thorough analysis of ChatGPT’s mathematical skills in relation to the mathematics evaluation for the VNHSGE exam. We seek to shed light on the possibilities of AI tools for educational support and investigate their role in changing the educational landscape by evaluating ChatGPT’s performance in these areas. This study also attempts to illustrate ChatGPT’s shortcomings when dealing with questions that differ from those present in the VNHSGE exam in terms of both structure and level of difficulty. # 2.2 Scope and Limitation
2306.06331#9
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
9
A. LLM End-to-end Motion Planning One natural idea is to use an LLM for both task and motion planning by directly generating a trajectory for a given language instruction; we refer to this as LLM End-to-end Motion Planning. In cases where the generated trajectory violates constraints, we re-prompt the model with the constraint violation to produce another trajectory, allowing up to five such re-prompts. Figure 2 shows this pipeline, including a specific failure case with two constraint-violating trajectories. B. LLM Task Planning A more common approach is to use an LLM to handle the task planning by directly generating a sequence of sub-tasks from a given language instruction; we refer to this as LLM Task Planning. To generate a final trajectory, the sub-tasks are handled by an independent motion planner. In this work, these sub-tasks are limited to navigation actions, and the motion planning is handled by the STL planner used by our proposed approach; this permits fair comparison of results across methods. Each sub-task is converted to STL to be consumed by the planner. We evaluate and compare against three methods that each use LLMs for task planning: (1) Naive Task Planning, (2) SayCan, and (3) LLM Task Planning + Feedback.
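A hedged sketch of the end-to-end re-prompting loop in Section III-A, capped at five retries on constraint violations; `llm_complete` and `check_constraints` are assumed interfaces rather than the paper's implementation.

```python
# Sketch of LLM end-to-end motion planning with violation feedback (illustrative).

def end_to_end_plan(instruction, environment, llm_complete, check_constraints,
                    max_reprompts=5):
    prompt = (f"Environment: {environment}\n"
              f"Instruction: {instruction}\n"
              "Output a trajectory as timed waypoints [(x, y, t), ...].")
    trajectory = llm_complete(prompt)
    for _ in range(max_reprompts):
        violations = check_constraints(trajectory, environment)
        if not violations:
            return trajectory
        prompt = (f"Your trajectory violates these constraints: {violations}\n"
                  "Please output a corrected trajectory as timed waypoints.")
        trajectory = llm_complete(prompt)
    return None  # no constraint-satisfying trajectory within the retry budget
```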
2306.06531#9
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
9
With the development of Artificial Intelligence (AI), some companies have started to explore the use of LLMs in practice (e.g., IBM implementing AI processes in HR [Benj Edwards, 2023]). Therefore, we propose a Cost-utility Analysis Model for Human-in-the-LOoP systems (CAMLOP; Sec. 3) to analyze and balance the cost and utility. CAMLOP introduces the cost-utility ratio from economics theory into the AI field to quantify these two factors (i.e., cost and utility) and spread the two factors across various aspects (e.g., time and money as cost; accuracy and user satisfaction as utility) so that reliable scores for various aspects are achieved. We instantiate MCS with twelve datasets across three classes of tasks (arithmetic, commonsense, and symbolic reasoning) (Sec. 4). MCS achieves new state-of-the-art levels of performance across most of the tasks. To show the applicability in real-world business, we apply CAMLOP in practice by posing an example to illustrate the balance between utility and cost in Sec. 4.5. Notably, a significant advantage w.r.t. cost and utility proves MCS’s superiority over strong baselines. # 2 Manual Correction System
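As a toy illustration of the cost-utility comparison that CAMLOP formalizes, consider the sketch below; the numbers and the way cost and utility are aggregated are assumptions for illustration only, not values or formulas from the paper.

```python
# Toy cost-utility comparison (all numbers and weights are illustrative).

def cost_utility_ratio(utility, cost):
    """Cost incurred per unit of utility; lower is better."""
    return cost / utility

# Hypothetical systems: utility might aggregate accuracy and user satisfaction,
# cost might aggregate annotation time and money (arbitrary 0-1 scales here).
fully_automatic = cost_utility_ratio(utility=0.70, cost=0.10)
human_in_the_loop = cost_utility_ratio(utility=0.85, cost=0.20)

print(f"automatic: {fully_automatic:.2f}, human-in-the-loop: {human_in_the_loop:.2f}")
```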
2306.07932#9
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
10
# 2.2 Scope and Limitation By analyzing ChatGPT’s responses to mathematics questions from the VNHSGE exam, this study seeks to assess ChatGPT’s mathematical capabilities. Our objective is to assess how well ChatGPT responds to these questions and to provide insight into ChatGPT’s potential in the context of Vietnamese education. It is important to remember that our evaluations are restricted to the specific structure of the VNHSGE exam. ChatGPT’s results cannot be extrapolated to tests with different question sets or difficulty levels. This restriction highlights the need for caution when extrapolating from our results and making generalizations regarding ChatGPT’s potential uses in educational contexts outside the scope of this study. # 2.3 Methods In this study, we evaluated the capability of the ChatGPT model to answer mathematical problems in the VNHSGE mathematics dataset [28]. The model, trained on a sizable corpus of text, was applied to the dataset of math problems using a sequence-to-sequence methodology. The mathematical problem was the model’s input, and the solution was its output. We compared ChatGPT’s generated answers with the correct responses given in the exam papers in order to evaluate its performance.
2306.06331#10
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
10
Naive Task Planning As proposed by [8], we evaluate using LLMs to generate the entire sub-task sequence without checking for executability. SayCan Alternatively, an LLM can be iteratively prompted to generate each subsequent sub-task conditioned on the previous sub-tasks in the sequence. The next sub-task can be selected from the top K candidates by combining the language model likelihood with a feasibility likelihood of the candidate action and choosing the most-likely next sub-task. This is the method proposed by [9]. We set K to 5 in our evaluations. LLM Task Planning + Feedback A third task planning method combines full sequence generation with feasibility checking to both find sub-task sequences that satisfy the full task and verify their feasibility before execution. For any infeasible sub-tasks, the LLM can be re-prompted with feedback about the infeasible actions to generate a new sub-task sequence. This is similar to the hierarchical method proposed by [12] but with feedback for re-prompting. C. Autoregressive LLM Specification Translation & Checking + Formal Planner Unlike LLM Task Planning, our approach translates NL to STL with an LLM and then plans the trajectory with an STL planner, as shown in Figure 1. We include two re-prompting techniques to improve translation performance:
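The SayCan-style selection step described above can be sketched as scoring the top-K candidate sub-tasks by the product of language-model likelihood and feasibility likelihood; the two scoring callables are assumed interfaces, not the original implementation.

```python
# SayCan-style next-sub-task selection sketch (illustrative only).

def select_next_subtask(history, candidates, lm_likelihood, feasibility, k=5):
    """Pick the most promising next sub-task among the top-k LM candidates."""
    # Rank candidates by language-model likelihood alone and keep the top k.
    top_k = sorted(candidates, key=lambda c: lm_likelihood(history, c), reverse=True)[:k]
    # Combine LM likelihood with feasibility (both treated as probabilities).
    return max(top_k, key=lambda c: lm_likelihood(history, c) * feasibility(c))
```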
2306.06531#10
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
10
# 2 Manual Correction System MCS automatically finds the incorrectly predicted samples to indicate when humans should be involved (Sec. 2.1) and then provides efficient operations to indicate how to correct rationales (Sec. 2.2). Fig. 1 shows the four stages of MCS. The first and final stages are simple prompting. The intermediate filtering stage and correction stage are our focus, as detailed below. # 2.1 Filtering Stage As shown in Fig. 1, after the first stage, the LLM samples three plausible rationales for a math problem that arrive at different answers. Just like humans, LLMs may make countless and various mistakes, but there are only a limited number of correct rationales for the right result. If most of the sampled rationales cannot reach agreement, then with high probability the sample is predicted incorrectly. To prove that empirically, we conduct quantitative experiments and find that incorrectly predicted samples tend to have greater diversity in their final answers when solving difficult reasoning problems. (Please refer to Appendix A for more details.)
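The answer-diversity signal used in the filtering stage can be sketched as the entropy of final answers across sampled rationales; the threshold below is an assumed value for illustration, not one reported in the paper.

```python
# Answer-diversity filter sketch (illustrative threshold).
import math
from collections import Counter

def answer_entropy(final_answers):
    """Shannon entropy of the distribution of final answers across rationales."""
    counts = Counter(final_answers)
    n = len(final_answers)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def needs_human_review(final_answers, threshold=1.0):
    """High answer diversity suggests the sample is likely mispredicted."""
    return answer_entropy(final_answers) > threshold

# Example: three sampled rationales that disagree get flagged for correction.
print(needs_human_review(["$26", "$13", "$10"]))  # True
print(needs_human_review(["$26", "$26", "$26"]))  # False
```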
2306.07932#10
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
11
We created a detailed process with many phases to carry out this examination. In the beginning, we gathered information from official test papers made available by the Vietnamese Ministry of Education and Training. We chose these questions as an accurate representation of the actual exam because they were all taken from high school mathematics exams. The data then needed to be formatted in a way that ChatGPT could interpret. The exam questions contained mathematical equations and symbols, which we transformed into LaTeX format to display in a uniform manner. The exam questions were then transformed from their LaTeX format into JSON (JavaScript Object Notation), a lightweight data interchange format that is frequently used in web applications. After formatting the data in a way that ChatGPT could understand, we provided the questions to the pre-trained ChatGPT model and collected its generated answers. Finally, we determined ChatGPT’s performance score by comparing the generated answers to the correct responses provided in the exam papers. Overall, this methodology allowed us to thoroughly evaluate ChatGPT’s capacity to answer mathematical problems in the VNHSGE exam. By outlining the specific procedures we took, we intend to offer a framework for future research examining the efficiency of AI-powered chatbots in assisting students in demanding exams. # 3 Dataset
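A hedged sketch of the scoring step this methodology describes: load JSON-formatted questions, query a model, and compare against the answer key. The JSON field names and the `ask_model` callable are assumptions for illustration, not the paper's released pipeline.

```python
# Exam-scoring sketch (field names and model interface are illustrative).
import json

def score_exam(json_path, ask_model):
    """Return the model's accuracy against the exam answer key."""
    with open(json_path, encoding="utf-8") as f:
        questions = json.load(f)  # assumed: [{"question": ..., "choices": [...], "answer": "A"}, ...]

    correct = 0
    for q in questions:
        prompt = (q["question"] + "\n" + "\n".join(q["choices"]) +
                  "\nAnswer with a single letter (A, B, C, or D).")
        predicted = ask_model(prompt).strip().upper()[:1]
        correct += int(predicted == q["answer"])
    return correct / len(questions)
```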
2306.06331#11
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
11
User prompt 1: Task explanation: 'Hope you can help me plan the trajectory to fulfill the instruction ...' + Few-shot examples + Environment settings: '[name: room1, color: red, position and size: [0, 0.9, -1, -0.5], function: kitchen] ...' + Instruction: 'At some point go to the yellow box, and at some point go to the red box, and then enter the green box, and always do not enter the blue area.'
GPT-4 response 1: [(-1.3, -1.3, 0), (-0.95, -0.2, 0.5), (0.45, -0.95, 0.9), (0.4, 0.95, 1.5), ...]
User prompt 2: Your trajectory between subpoint3 and subpoint4 enters the blue box, you should avoid it.
GPT-4 response 2: [(-1.3, -1.3, 0), (-0.95, -0.2, 0.5), (0.45, -0.95, 0.9), (-0.35, 0.55, 1.45)]
[Figure: plot of the resulting robot trajectory on x/y axes]
2306.06531#11
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
11
Specifically, in the first stage the LLM is prompted with a set of manually written CoT exemplars following Wei et al. [2022] (please refer to the Appendix for more details). Then, we sample a set of candidate outputs from the LLM’s decoder to generate a set of rationales.¹ Finally, we use the diversity degree to identify the samples most likely to be incorrect and hence in need of human involvement. Here, we adopt a widely used selection method: Diversity Entropy [Brinker, 2003, Yang et al., 2015, Agarwal et al., 2020]. A further study of Diversity Entropy in Sec. 4.4 quantitatively demonstrates its advantage.
2306.07932#11
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
12
# 3 Dataset The VNHSGE mathematics test dataset for the academic years 2019–2023 was used in this investigation. 250 multiple-choice math questions covering a range of subjects, such as algebra, geometry, and calculus, make up the dataset. Based on Bloom’s Taxonomy, these questions were divided into four difficulty levels: K (knowledge), C (comprehension), A (application), and H (high application). The Vietnamese Ministry of Education and Training publicly released the dataset, which is frequently used to evaluate students’ mathematical aptitude. # 3.1 Question Levels Different levels of competence in comprehending and applying mathematical concepts are necessary for solving mathematical problems. The dataset includes a range of difficulty levels, from knowledge-based questions that evaluate fundamental understanding to high-application questions that assess the capacity to analyze and synthesize information in order to solve complex problems. This allows for a thorough evaluation of ChatGPT’s mathematical problem-solving abilities. Based on the type of cognitive activity and the verbs used in answering the questions, the four levels of complexity—K, C, A, and H—were established. We can learn more about ChatGPT’s strengths and weaknesses when we evaluate its performance on a range of mathematical problems of varying degrees of difficulty. # 3.2 Question Topics
2306.06331#12
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
12
Fig. 2. GPT-4 failure case for direct end-to-end trajectory planning. The orange line shows the correct path obeying the instruction. The purple and gray dashed lines show the trajectories from GPT-4 after the first and second prompts, respectively. GPT-4 generates a list of (x, y) locations with associated timestamps. The initial prompt describes the language modeling task, environment state, and instruction. Each object is a rectangle described by (x, y) boundaries.
We use two re-prompting techniques: one for syntactic errors and another for semantic errors. By “semantic error”, we mean a misalignment between the intended task described in natural language and the STL expression to which it is translated. Figure 3 shows the structure of the context for re-prompting the model for semantic error correction; we include a full prompt example in our code repository§. In this work, we use STL [22] as a formal task specification that supports continuous real-time constraints suitable for time-critical missions. An STL formula is defined recursively according to the following syntax: ϕ ::= π^µ | ¬ϕ | ϕ ∧ φ | ϕ ∨ φ | F_[a,b] ϕ | G_[a,b] ϕ | ϕ U_[a,b] φ
2306.06531#12
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
12
Formally, given a manually written CoT prompt and a sample s, MCS decodes a set of N outputs, where each output r_i is a sequence of tokens representing the i-th rationale, and the rationale r_i is then used to obtain the answer a_i. As previously demonstrated, greater diversity in the set of answers indicates potential incorrect predictions and flags a sample for human involvement. First, we obtain the predicted answer a_i through $\arg\max_{a_i} P(r_i, a_i \mid s)$. For example, in Fig. 1, r_i is “She has 16 - 3 = 13 eggs left. So she has 16 * 2 - 3 = $13.”, and a_i is $13. Then we calculate the answer distribution for the answer set {a_1, ..., a_N} of s. For each distinct value a ∈ {a_1, ..., a_N}, the probability is
$p_a = \frac{\sum_{i=1}^{|N|} \mathbb{1}(a_i = a)}{|N|}$   (1)
where |N| denotes the number of answers. For example, in Fig. 1, there are three answers as well as three rationales. We use the answer entropy as the Diversity Entropy (DE) score for the sample s:
$\mathrm{DE} = -\sum_{a \in \{a_i\}} p_a \log p_a$   (2)
2306.07932#12
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
13
# 3.2 Question Topics The dataset provides a thorough assessment of ChatGPT participants’ mathematical knowledge and abilities by encompassing a wide range of mathematical topics. M11A: Combinations and Probability; M11B: Number Series (Arithmetic progression, Geometric progression); M11C: Spatial Geometry; M12A: Derivatives and Applications; M12B: Exponential and Logarithmic Functions; M12C: Primitives and Integrals; M12D: Complex Numbers; M12E: Polyhedrons; M12F: Rotating Circle Block; and M12G: Oxyz Spatial Calculus. These topics were included to ensure a thorough evaluation of the ChatGPT’s mathematical abilities by testing its understanding, application, analysis, and evaluation of mathematical concepts and principles. Researchers can learn about ChatGPT’s strengths and limitations and identify opportunities for development by analyzing how well it performs across all of these issues. # 3.3 Knowledge matrix
2306.06331#13
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
13
where ϕ and φ are STL formulas, and π^µ is an atomic predicate. ¬ (negation), ∧ (and), ∨ (or), ⇒ (imply), and ⇔ (equal) are logical operators. F_[a,b] (eventually/finally), G_[a,b] (always/globally), and U_[a,b] (until) are temporal operators with real-time constraints t ∈ [a, b]. The action primitives in this work are ’enter(room name)’ and ’not enter(room name)’. (1) objects/rooms in the whole environment are known, which serves as the environment information to the STL planner.
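As a worked illustration of this syntax (a plausible translation only, not necessarily the exact formula produced in the paper), the instruction from Figure 2 (visit the yellow box and the red box at some point, then enter the green box, and always avoid the blue area) could be expressed over an assumed mission horizon $T$ as:

```latex
F_{[0,T]}\,\mathrm{enter(yellow\_box)} \;\wedge\;
F_{[0,T]}\big(\mathrm{enter(red\_box)} \wedge F_{[0,T]}\,\mathrm{enter(green\_box)}\big) \;\wedge\;
G_{[0,T]}\,\mathrm{not\_enter(blue\_area)}
```

Here the nested eventually operator encodes the ordering "red box, then green box," and the globally operator enforces avoidance of the blue area throughout the mission.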
2306.06531#13
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
13
$\mathrm{DE} = -\sum_{a \in \{a_i\}} p_a \log p_a$   (2)
The higher the DE score, the more likely the sample needs manual correction. A threshold α is set for DE as a hyper-parameter. # 2.2 Correction Stage Regarding how humans should be involved in the loop, the most straightforward idea is for humans to handle the filtered samples while the LLM processes the remaining samples. However, having humans handle a sample as a whole problem is still labor-intensive, especially for difficult mathematical problems. For this reason, we claim that humans should pay local attention to simple sub-logics in the rationale. Here, a sub-logic is typically a group of words that can stand alone as a complete thought within a complex rationale; we treat a sentence as a sub-logic. To support our claim, there exist some premises. Firstly, an incorrect rationale could output the correct final answer after correcting the erroneous sub-logic in the rationale. To empirically prove
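To make the selection rule concrete, here is a minimal sketch (not the authors' code) of equations (1)-(2): it computes DE from a set of sampled answers and flags a sample when DE exceeds a threshold α. The sampled answers and the α value below are illustrative assumptions.

```python
import math
from collections import Counter

def diversity_entropy(answers):
    """DE = -sum_a p_a * log(p_a) over the distinct answers sampled for one input."""
    counts = Counter(answers)
    n = len(answers)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def needs_human_correction(answers, alpha=0.5):
    """Flag a sample for manual correction when its DE exceeds the threshold alpha."""
    return diversity_entropy(answers) > alpha

# Example: three sampled rationales produced the answers $13, $26, $26.
print(needs_human_correction(["$13", "$26", "$26"]))  # True for alpha=0.5 (DE is about 0.64)
```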
2306.07932#13
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
14
# 3.3 Knowledge matrix The question matrix is a key element of assessment systems: it gives a thorough breakdown of the criteria and content to be evaluated. This technical design was used to create and compile questions for various tests and examinations. It acts as a reference for test designers in choosing appropriate questions that accurately reflect the educational and learning objectives of the assessment system. By ensuring that the test questions assess the desired knowledge, skills, and abilities of the examinees and that they are aligned with the learning outcomes, the question matrix helps assure the validity, reliability, and fairness of the assessment. As a result, the question matrix is an essential tool for creating high-quality tests that accurately assess student achievement and guide educational decisions. A knowledge matrix, which classifies each question according to its specific level and topic, can effectively depict the structure and substance of an exam. Exam administrators and educators can benefit greatly from employing a knowledge matrix, since it can be used to determine where students’ knowledge is strong or weak and to build focused interventions to boost performance. Additionally, the knowledge matrix ensures that the exam covers a wide range of subjects and levels of difficulty, providing a thorough evaluation of students’ knowledge and abilities. The use of a knowledge matrix increases the validity and reliability of exam scores, ensuring that the results accurately reflect students’ abilities and accomplishments.
2306.06331#14
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
14
(1) objects/rooms in the whole environment are known, which serves as the environment information to the STL planner.
Syntactic Checking & Semantic Checking
Open-loop translation can suffer from syntactic and semantic errors. We use two re-prompting techniques to automatically correct such errors. Like [20], we use a verifier to check for syntax errors (we use a simple rules-based STL syntax checker); any errors are provided as feedback when re-prompting the LLM to generate corrected STL. We repeat until no errors are found (up to five iterations). For semantic errors, we propose a novel autoregressive re-prompting technique; we provide the STL planner’s generated state sequence (i.e., [[in(road), 0], [in(red kitchen), 0.5], [in(blue restroom2), 1.2], ...]) as context alongside the original instruction and ask the LLM to check whether the plan aligns with the instruction’s semantics. If it does not, the LLM is prompted to modify the STL, which repeats the syntactic and semantic re-prompting. This process terminates in the case of no detected errors or no change in STL (up to three iterations). The structure of the semantic error prompt is shown in Figure 3; full example prompts can be found in our code repository§.
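The following is a schematic sketch of the correction loop just described. The `llm`, `check_syntax`, and `stl_planner` interfaces are hypothetical stand-ins introduced for illustration, not the authors' actual API.

```python
# Schematic sketch of the syntactic/semantic re-prompting loop (illustrative only).
def translate_with_correction(instruction, env, llm, check_syntax, stl_planner,
                              max_syntax_iters=5, max_semantic_iters=3):
    stl = llm.translate(instruction, env)

    for _ in range(max_semantic_iters):
        # Syntactic repair: feed rule-based checker errors back to the LLM.
        for _ in range(max_syntax_iters):
            errors = check_syntax(stl)
            if not errors:
                break
            stl = llm.fix_syntax(stl, errors)

        # Plan, then ask the LLM whether the resulting state sequence
        # matches the original instruction (semantic check).
        state_sequence = stl_planner.plan(stl, env)
        new_stl = llm.check_semantics(instruction, stl, state_sequence)
        if new_stl is None or new_stl == stl:   # no error found or no change in STL
            return stl
        stl = new_stl
    return stl
```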
2306.06531#14
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
14
To support our claim, there exist some premises. Firstly, an incorrect rationale could output the correct final answer after correcting the erroneous sub-logic in the rationale. To empirically prove that, we conduct quantitative experiments on twelve datasets and discover that, in general, up to 50% of CoT errors are indeed caused by incorrect intermediate rationales; after correcting these incorrect rationales, the final answers turn out to be correct. Secondly, correcting sub-logics indeed resolves the majority of incorrect rationales; we conduct an analytical experiment across multiple tasks in Sec. 4.3 and provide the evidence. Thirdly, a questionnaire survey shows that correcting each sub-logic independently is much easier and more user-friendly for humans than checking the entire rationale (please refer to Appendix B for more details).
¹ Most existing sampling algorithms, including temperature sampling [Ackley et al., 1985, Ficler and Goldberg, 2017], top-k sampling [Fan et al., 2018, Holtzman et al., 2018, Radford et al., 2019], and nucleus sampling [Holtzman et al., 2019], could be used for sampling the required rationales. Here we follow Wang et al. [2022] for a fair comparison. Other sampling methods can also bring a general benefit.
2306.07932#14
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
15
The knowledge matrix for the VNHSGE exam in Mathematics for the years 2019-2023 is displayed in Table 1. We have a distribution of questions based on topic and degree of difficulty, from which we can identify the number of questions at each level for a given topic. The distribution of questions by level, shown in Figure 1, is as follows: knowledge 103 (41%), comprehension 77 (31%), application 41 (16%), and high application 29 (12%). The breakdown of questions by topic is: M11A - 10 (4%), M11B - 5 (2%), M11C - 8 (3%), M12A - 57 (23%), M12B - 39 (16%), M12C - 33 (13%), M12D - 26 (10%), M12E - 17 (7%), M12F - 14 (6%), and M12G - 41 (16%). Overall, the knowledge matrix offers a thorough overview of the exam’s structure and content, making it possible to assess and enhance students’ mathematical understanding and problem-solving skills. The exam framework does not allocate questions uniformly: some topics and problems call only for knowledge and comprehension, not high-level application.
2306.06331#15
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
15
STL Trajectory Planner
We use a state-of-the-art multi-agent STL planner [23] that uses piece-wise linear reference paths defined by timed waypoints to recursively encode the constraints expressed in the provided STL expression. It defines the validity of an STL formula with respect to a trajectory and then optimizes the trajectory to maximize the validity. The planner not only searches for a sub-task sequence but also optimizes time efficiency under the dynamical constraints of the robot’s maximum velocity. Here we assume that the locations and shapes of all the
# IV. EXPERIMENTAL DESIGN
Each task scenario is set in a 2D environment and entails navigation of one or more robots; the robots have extent in the environment and are initialized with varying start positions. Each environment consists of regions with shapes, locations, and properties (e.g., color, name, function). For each method, the LLM is initially prompted with a description of the language task (e.g., task planning or translation) and five in-context examples for that task. To mitigate variance across prompts, we initially tested six different sets of examples for
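For intuition, a piece-wise linear reference path of this kind can be represented simply as a list of timed waypoints. The sketch below is illustrative only; it reuses the (x, y, t) values from the Figure 2 example and shows how a position at any time can be obtained by linear interpolation.

```python
# Timed waypoints (x, y, t) with strictly increasing t (values from the Fig. 2 example).
waypoints = [(-1.3, -1.3, 0.0), (-0.95, -0.2, 0.5), (0.45, -0.95, 0.9), (0.4, 0.95, 1.5)]

def position_at(t, wps):
    """Linearly interpolate the (x, y) position along a piece-wise linear timed path."""
    for (x0, y0, t0), (x1, y1, t1) in zip(wps, wps[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return (x0 + a * (x1 - x0), y0 + a * (y1 - y0))
    return wps[-1][:2]   # clamp beyond the final timestamp

print(position_at(0.25, waypoints))   # halfway between the first two waypoints
```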
2306.06531#15
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
15
Specifically, in the correction stage, we ask humans to check the filtered sample and only correct the rationale with the highest probability. During the correction, to simplify, the operations that a human can perform on the sub-logics include “modifying”, “adding”, and “deleting”. As shown in Tab. 1, the first case displays the modifying operation. After the modifying operation, the corrected sub-logic “3 * 100 + 8 * 10 + 3 * 1 = 383” helps the LLM output the correct answer.
# Correction Operation: Modifying
QUESTION: Q: I have 3 hundred, 8 tens, and 3 ones. What number am I? A:
RATIONALE: I have 3 hundred, 8 tens, and 3 ones. That means I have «Before Modifying»: 3 * 100 + 8 * 10 + 3 * 1 = 303 «After Modifying»: 3 * 100 + 8 * 10 + 3 * 1 = 383.
Correction Operation: Deleting
2306.07932#15
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
16
framework does not have a uniform allocation of questions. There are some topics and problems that call only for knowledge and comprehension, not high-level application. A majority of the questions (roughly 70%) are focused on knowledge and comprehension. In addition, only 10% of the questions concentrate on content from the 11th grade, while 90% are at the 12th-grade level. Questions on topics such as M12A, M12B, M12G, and M12C are plentiful. It should be emphasized, nonetheless, that the questions on topic M11B call only for knowledge-level (K) proficiency.
2306.06331#16
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
16
[Initial Prompt for Translation] <Translation Task Explanation> <Few-shot Examples> <Environment State>
[LLM Translation Response] <STL Translation>
[Rules-based Syntax Checker] <Description of Errors> OR <Correct!>
[STL Planner Output] <STL Planner Trajectory> <State Sequence of Trajectory>
[Prompt for Semantic Errors] <Original Novel Instruction> <Chain-of-thought Prompt to Compare Plan to Original Instruction>
[LLM Semantic Error Response] <Chain-of-thought Reasoning> <Description of Errors> <Corrected STL> OR <Correct STL>
Fig. 3. High-level structure of the prompt used for AutoTAMP. The arrow on the right indicates re-prompting for syntax error correction. The arrow on the left indicates re-prompting in cases of semantic errors.
each method and chose the one that performed best. Through this testing, we found that the variance over prompts was insignificant relative to overall performance.
2306.06531#16
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
16
Correction Operation: Deleting
QUESTION: Clarence has 5 oranges. He gets 3 more from Joyce. Later, Clarence buys 9 Skittles at the store. How many oranges does Clarence have in all? A:
RATIONALE: Clarence has 5 oranges. He gets 3 more from Joyce, so now he has 5 + 3 = 8 oranges. «Delete»: Later he buys 9 Skittles at the store, so he has 8 - 9 = -1 oranges.
Correction Operation: Adding
QUESTION: Q: There are 83 trees in a park. 36 of them are willows and the rest are oaks. How many more oaks than willows are there in the park? A:
RATIONALE: There are 83 trees in the park. 36 of them are willows, and the rest are oaks. This means there are 83 - 36 = 47 oaks in the park. There are 47 more oaks than willows. «Add»: There are 36 willows and 47 oaks in the park now, so there are 47 - 36 = 11 more oaks than willows.
Table 1: Examples of manual correction for incorrect sub-logic. The operations that a human can perform on the rationales include modifying, adding, and deleting. # 3 Cost-utility Analysis Model for Human-in-the-Loop Systems
2306.07932#16
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
17
The distribution of question levels and topics as a percentage is shown in Figure 1. The topic M12A, which comprises 23% of the total questions, is distributed as follows: 9.60% at the K level, 6.00% at the C level, 2.40% at the A level, and 4.80% at the H level. Based on this detailed distribution, we can analyze the performance of students or of ChatGPT at the level of individual levels and topics. This graphic portrayal enables a comprehensive grasp of the distribution of questions across various levels and topics. By examining Figure 1, one can obtain insights into the areas where test takers are anticipated to perform well and those that may need more improvement. It offers useful data that teachers and curriculum designers can use to better understand the strengths and weaknesses of their students and the efficiency of their instructional strategies. Overall, Table 1 and Figure 1 together give a thorough breakdown of the distribution of the questions and are an effective tool for educational research and practice. # Table 1: Knowledge matrix in 2019-2023
2306.06331#17
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
17
each method and chose the one that performed best. Through this testing, we found that the variance over prompts was insignificant relative to overall performance. We evaluated the different methods described in Section III across six different task scenarios (three single-agent and three multi-agent) with different combinations of geometric and temporal constraints. For each scenario description below, we indicate the presence of these constraints with G and T, respectively. For each method, we evaluate performance with both GPT-3 and GPT-4 as the LLM. Note that in multi-agent scenarios, we do not test SayCan or LLM Task Planning + Feedback because these methods are not straightforwardly adaptable to multiple agents. For multi-agent tasks, the agents are assigned a subtask and a time for completion at each time step; since the times for completion often differ, it is not obvious how or when to check and provide feedback. We also terminate and report failure for test cases that take more than 90 minutes. We automatically check the resulting trajectories via hard-coded checkers. The full set of experiments took two weeks using four 16-core CPUs; the cost of LLM API calls for evaluating all of the approaches was ∼1500 USD.
2306.06531#17
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
17
# 3 Cost-utility Analysis Model for Human-in-the-Loop Systems CAMLOP brings the cost-utility relation from economic theory [Varian, 2014] into the AI field to quantify these two factors (i.e., cost and utility). For human-in-the-loop systems like MCS, we divide the goods into two simple categories: human labor and LLMs. Company strategic decision-makers always choose the best bundle of goods they can afford. The costs include direct and indirect costs. The direct cost is the money spent on the goods, while indirect costs mainly include overhead costs from management and rent. Indirect costs also include intangible costs that should be considered, such as the impact on customers, employees, or delivery times. Utilities include boosted accuracy, social prestige, and user satisfaction. For simplicity, we only consider money and time for cost, and accuracy and user satisfaction for utility, in our experiments.
2306.07932#17
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
18
# Table 1: Knowledge matrix in 2019-2023

| LEVEL | M11C | M11B | M11A | M12A | M12B | M12C | M12D | M12E | M12F | M12G | Total | % |
|-------|------|------|------|------|------|------|------|------|------|------|-------|---|
| K | 1 | 5 | 5 | 24 | 15 | 13 | 8 | 8 | 7 | 17 | 103 | 41% |
| C | 6 | | 4 | 15 | 14 | 8 | 10 | 3 | 2 | 15 | 77 | 31% |
| A | 1 | | 1 | 6 | 5 | 9 | 5 | 5 | 5 | 4 | 41 | 16% |
| H | | | | 12 | 5 | 3 | 3 | 1 | | 5 | 29 | 12% |
| TOPIC | 8 (3%) | 5 (2%) | 10 (4%) | 57 (23%) | 39 (16%) | 33 (13%) | 26 (10%) | 17 (7%) | 14 (6%) | 41 (16%) | 250 | 100% |

Figure 1: Distribution of the number of questions by levels and topics in percentage. # 3.4 Prompt and Answer
2306.06331#18
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
18
HouseWorld1 (single-agent) As shown in Figure 4(a), this is a house environment from [24]. We first manually constructed 10 different instructions of varying complexity before prompting GPT-4 to paraphrase each into 9 differently worded instructions with the same meaning, resulting in 100 total instructions for this environment. For each instruction, we randomly initialize between two start-end position pairs for 200 total test cases. For this scenario, we do not impose a hard time constraint for the planned trajectory. HouseWorld2 (T, single-agent) This scenario is identi- [Fig. 4: (a) HouseWorld, (b) Chip’s Challenge, (c) Overcooked, (d) Rover, (e) Wall. HouseWorld and Chip’s Challenge are single-agent scenarios. Overcooked, Rover, and Wall are multi-agent scenarios. The black square in Overcooked is inadmissible. The lines indicate the correct trajectories following the instructions. For the HouseWorld and Chip’s Challenge environments, the black round dot and pentagonal dot indicate the start and end positions, respectively.]
2306.06531#18
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
18
We draw Fig. 2, where the horizontal axis x1 and the vertical axis x2 are the quantities of human labor and LLMs, respectively. First, we introduce notations related to the cost. We define p1 · x1 as the cost spent on human labor and p2 · x2 as the cost spent on the LLMs. We indicate the bundle by (x1, x2) (a data point in Fig. 2). The corresponding unit prices are p1 and p2. The total cost the company decision-maker has to spend is denoted as y. Therefore, the budget constraint can be represented as p1x1 + p2x2 ≤ y. The solid straight line is the set of data points that cost exactly y: p1x1 + p2x2 = y. To note, the cost contains various aspects, as mentioned before; in Fig. 2, for simplicity, we express these different aspects as a unified value according to a unified standard. Then we introduce utilities. Figure 2: Illustration of CAMLOP. (Footnote 2: Most notations follow those from [Varian, 2014].)
2306.07932#18
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
19
Figure 1: Distribution of the number of questions by levels and topics in percentage. # 3.4 Prompt and Answer When asking questions to ChatGPT, we can receive answers in different formats. However, to make the process of handling results easier and to ensure consistency, we ask ChatGPT to provide replies in a specific structure. Figure 2 and Table 2 demonstrate an example of the required structure for ChatGPT responses. The table is divided into three columns: the first column reveals the prompt’s format; the second column displays the prompt itself; and the third column provides the response that ChatGPT created. The table demonstrates the adaptability and versatility of the model by giving instances of how ChatGPT can respond to different prompts in various formats. When we receive automatic responses, we use Word format on https://chat.openai.com/, while the OpenAI API uses JSON format. The table shows how ChatGPT can provide responses to prompts in many formats, which is a useful feature for many applications. # 4 Results
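To make the structured-prompt idea above concrete, here is a minimal Python sketch (not from the paper; the function names and regular expressions are illustrative assumptions) that wraps a raw multiple-choice question with the fixed pre-question and parses the "Choice"/"Explanation" fields out of a reply:

```python
import re

# Fixed pre-question described in the paper; "{}" is filled with the actual question.
PRE_QUESTION = (
    'I want you to answer the question in the following structure:\n'
    'Choice: "A" or "B" or "C" or "D"\n'
    'Explanation: Explain the answer\n'
    'The question is: {}'
)

def build_prompt(question: str) -> str:
    """Wrap a raw multiple-choice question with the fixed pre-question."""
    return PRE_QUESTION.format(question)

def parse_response(text: str) -> dict:
    """Extract the choice letter and explanation from a structured reply."""
    choice = re.search(r'Choice:\s*"?([ABCD])"?', text)
    explanation = re.search(r'Explanation:\s*(.*)', text, re.DOTALL)
    return {
        "CC": choice.group(1) if choice else None,                      # ChatGPT's choice
        "CE": explanation.group(1).strip() if explanation else None,    # ChatGPT's explanation
    }

if __name__ == "__main__":
    q = "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3."
    print(build_prompt(q))
    print(parse_response('Choice: "A"\nExplanation: V = (2a)^3 = 8a^3.'))
```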
2306.06331#19
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
19
cal to HouseWorld1, but each planned trajectory is subjected to a hard time constraint. This time limit is pre-determined by completing the correct trajectory with 0.8 maximum velocity. The remaining task scenarios were designed with specific rules and goals for the agent(s) to follow. For each scenario, GPT-4 was used to paraphrase the original description into 20 uniquely worded variants with the same meaning, which are further checked by humans. We instantiate three different instances of the environment for each scenario and randomize five different start/end location pairs for a total of 300 test cases. Chip’s Challenge (G, single-agent) Figure 4(b) shows a scenario inspired by Chip’s Challenge, a classic puzzle-solving game with strict geometric and logical constraints. The robot must reach all goal regions (blue) but must acquire a unique key to pass through the corresponding door. Overcooked (G & T, multi-agent) Figure 4(c) shows a scenario inspired by Overcooked, a popular cooking simulation game with strict time constraints. The agents must cooperatively gather ingredients and return to CookingRoom in a limited time. The multi-agent motion planning is challenged by limited space for agents to maneuver.
2306.06531#19
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
19
Figure 2: Illustration of CAMLOP. (Footnote 2: Most notations follow those from [Varian, 2014].) A utility function u(x1, x2) is a way to assign a utility value to the bundle (x1, x2). As shown in Fig. 2, the set of all data points (x1, x2) such that u(x1, x2) equals a constant is called a level set (solid curve). Data points on higher indifference curves have larger utility. We adopt a commonly used utility function, the Cobb-Douglas utility function u(x1, x2) = x1^c · x2^d, where c and d are positive numbers that we need to learn. Given a model parameterized by c, d, and a fixed cost y, the model predicts the optimal choice (x1*, x2*) with the highest utility, which is desired by the company strategic decision-makers. Note an important feature of this optimal choice: at this data point the indifference curve is tangent to the budget line p1x1 + p2x2 = y. To note, we introduce the modeling of CAMLOP in this section. More details about the inference and learning are shown in Appendix C and Appendix D. # 4 Experiments # 4.1 Setup
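As a worked illustration of the tangency condition described above (not taken from the paper), the Cobb-Douglas maximum on the budget line p1x1 + p2x2 = y has the standard closed-form solution x1* = (c/(c+d))·y/p1 and x2* = (d/(c+d))·y/p2. The short Python sketch below uses made-up exponents, prices, and budget purely for illustration:

```python
def cobb_douglas_optimum(c, d, p1, p2, budget):
    """Utility-maximizing bundle for u(x1, x2) = x1**c * x2**d
    subject to p1*x1 + p2*x2 = budget (standard closed-form solution)."""
    x1 = (c / (c + d)) * budget / p1
    x2 = (d / (c + d)) * budget / p2
    return x1, x2

def utility(c, d, x1, x2):
    return (x1 ** c) * (x2 ** d)

if __name__ == "__main__":
    # Hypothetical learned exponents and unit prices (not from the paper).
    c, d = 0.6, 0.4
    p1, p2 = 20.0, 5.0   # e.g., cost per unit of human labor vs. per unit of LLM usage
    y = 1000.0           # total budget
    x1_opt, x2_opt = cobb_douglas_optimum(c, d, p1, p2, y)
    print(f"optimal bundle: x1*={x1_opt:.1f}, x2*={x2_opt:.1f}, "
          f"utility={utility(c, d, x1_opt, x2_opt):.2f}")
```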
2306.07932#19
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
20
# 4 Results The VNHSGE dataset’s mathematics exam is intended to evaluate ChatGPT’s mathematical knowledge and problem-solving skills. The test consists of 250 questions in the VNHSGE mathematics dataset [28], divided into ten topics. [Figure 2: Formatted question and ChatGPT response — the pre-question ("I want you to answer the question in the following structure: Choice: "A" or "B" or "C" or "D"; Explanation: Explain the answer; The question is: ...") is combined with the question into a new question (prompt), which is sent to ChatGPT to obtain a response.] Table 2: An example of prompt and response (columns ID, IQ, Q, C, IA, E). For ID 1, the question Q is "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3.", the choice C is "A", and the explanation E is "The volume of a cube with edge 2a is: V=(2a)^3=8a^3."
2306.06331#20
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
20
Rover (G & T, multi-agent) Figure 4(d) is a scenario used by [23]. Multiple agents must reach each observation region (blue) before transmitting their observations from a red region, all while subjected to time and energy constraints. Wall (G & T, multi-agent) Figure 4(e) is also from [23]. Multiple agents must occupy each goal region (blue) while subject to a time constraint and a maneuver bottleneck. # V. RESULTS We report the task success rates for the single-agent and multi-agent scenarios in Table I and Table II, respectively. For HouseWorld1 (Figure 4(a)) with no hard time constraint, [TABLE I: Task success rates for single-agent scenarios. Each scenario’s constraints are listed in the table; rows are grouped by GPT-3 and GPT-4.]
2306.06531#20
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
20
To note, we introduce the modeling of CAMLOP in this section. More details about the inference and learning are shown in Appendix C and Appendix D. # 4 Experiments # 4.1 Setup Tasks and datasets. For arithmetic reasoning tasks, we conducted a series of experiments on the Math Word Problem Repository [Amini et al., 2019], including AddSub [Hosseini et al., 2014], MultiArith [Roy and Roth, 2016], SingleEq [Koncel-Kedziorski et al., 2015] and SingleOp [Kushman et al., 2014]. We also included ASDiv [Miao et al., 2021], AQUA-RAT [Miao et al., 2021], GSM8K [Cobbe et al., 2021], and SVAMP [Patel et al., 2021]. For commonsense reasoning tasks, we used CommonsenseQA [Talmor et al., 2018] and StrategyQA [Geva et al., 2021]. For symbolic reasoning tasks, we used Last Letter Concatenation and Coin Flip [Wei et al., 2022]
2306.07932#20
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
21
Question (JSON format): { "ID": "Q1", "IQ": " ", "Q": "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3.", "C": "A", "IA": " ", "E": "The volume of a cube with edge 2a is: V=(2a)^3=8a^3.", } Pre-question (JSON format): "I want you to answer the question in the following structure: " " Choice: "A" or "B" or "C" or "D" " " Explanation: Explain the answer" " The question is: {}" New Question (Prompt): I want you to answer the question in the following structure: Choice: "A" or "B" or "C" or "D" Explanation: Explain the answer The question is: 1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3." Response (JSON format): { "ID": "1", "IQ": " ", "Q": "1) The volume of a cube with edge
2306.06331#21
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
21
Table I (single-agent scenarios), task success rates.

GPT-3:
| Method | HouseWorld1 (soft time constraint) | HouseWorld2 (hard time constraint) |
|---|---|---|
| LLMs as Motion Planners: End-to-end Motion Planning | 0.0% | 0.0% |
| LLMs as Task Planners: Task Planning (naive) | 74.0% | 36.0% |
| LLMs as Task Planners: SayCan | 75.5% | 36.0% |
| LLMs as Task Planners: Task Planning (feedback) | 79.0% | 40.0% |
| LLMs as Translators: No Corrections | 28.0% | 27.0% |
| LLMs as Translators: Syntax | 49.0% | 47.0% |
| LLMs as Translators: Syntax + Semantics (AutoTAMP) | 62.0% | 62.0% |

GPT-4:
| Method | HouseWorld1 (soft time constraint) | HouseWorld2 (hard time constraint) | Chip's Challenge (geometric constraints) |
|---|---|---|---|
| LLMs as Motion Planners: End-to-end Motion Planning | 9.5% | 9.5% | 0.0% |
| LLMs as Task Planners: Task Planning (naive) | 90.0% | 45.0% | 0.0% |
| LLMs as Task Planners: SayCan | 90.0% | 47.5% | 0.0% |
| LLMs as Task Planners: Task Planning (feedback) | 92.0% | 49.0% | 0.0% |
| LLMs as Translators: No Corrections | 43.5% | 42.0% | 42.7% |
| LLMs as Translators: Syntax | 59.5% | 59.0% | 70.0% |
| LLMs as Translators: Syntax + Semantics (AutoTAMP) | 82.5% | 82.0% | 87.7% |
| LLMs as Translators: NL2TL + Syntax + Semantics | - | 83.5% | 86.0% |
2306.06531#21
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
21
Baselines. We primarily compare MCS with the following baselines. It is noteworthy that all baselines use the same LLM as the decoder. For a fair comparison, we report the results of Self-consistency, MCS, and MCS + Self-consistency with the same 5 rationales sampled from the decoder. The details of the baselines are as follows: 1. CoT-prompting. Chain-of-thought prompting with greedy decoding [Wei et al., 2022]. 2. Self-consistency. Chain-of-thought prompting that replaces the greedy decoding strategy used in CoT-prompting: Self-consistency generates a set of rationales by sampling from the LLM's decoder and determines the optimal answer by taking a majority vote [Wang et al., 2022]. Models and scales. We use GPT-3 [Ouyang et al., 2022, Brown et al., 2020b] with 175-billion parameters as the LLM. More details are provided in Appendix E. For our methods, we provide the following two variants:
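For concreteness, the majority-vote step of the Self-consistency baseline can be sketched as follows; this is an illustrative snippet rather than the authors' code, and `sample_rationale` is a hypothetical stand-in for a temperature-sampled LLM decoding call:

```python
from collections import Counter

def self_consistency(sample_rationale, prompt, n_samples=5):
    """Sample several chain-of-thought rationales and majority-vote the answers.

    `sample_rationale(prompt)` is assumed to return (rationale_text, final_answer);
    here it stands in for a temperature-sampled LLM call.
    """
    samples = [sample_rationale(prompt) for _ in range(n_samples)]
    answers = [answer for _, answer in samples]
    best_answer, _count = Counter(answers).most_common(1)[0]
    return best_answer, samples

if __name__ == "__main__":
    import random
    # Toy stand-in decoder: returns a noisy answer to show the voting behavior.
    def fake_decoder(prompt):
        answer = random.choice(["18", "18", "18", "20", "16"])
        return f"step-by-step reasoning ... so the answer is {answer}", answer

    answer, _ = self_consistency(fake_decoder, "Q: ...", n_samples=5)
    print("majority-vote answer:", answer)
```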
2306.07932#21
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
22
D. 6a^3." Response (JSON format): { "ID": "1", "IQ": " ", "Q": "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3.", "C": "A", "IA": " ", "E": "The volume of a cube with edge 2a is: V=(2a)^3=8a^3.", "CC": "A", "CE": "The formula for the volume of a cube is V = s^3, where s is the length of one of its sides. Therefore, the volume of the cube with a side length of 2a is: V = (2a)^3 = 8a^3", } Response (Word format):
2306.06331#22
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.07932
22
1. MCS. MCS is the result of manual correction of the top 40% of CoT predictions ranked using DE. A detailed analysis of the Diversity Entropy threshold is shown in Sec. 4.4. 2. MCS + Self-consistency. MCS + Self-consistency combines marginalizing out the sampled rationales with MCS. In practice, we use Self-consistency to get answers by majority vote, and then use MCS to manually correct incorrect sub-logics of the first rationale among the decoded rationales, with DE calculated over the decoded rationales. Sampling scheme. To sample diverse rationales, we followed settings similar to those used in Wang et al. [2022] for open-text generation. We use T = 0.7 without top-k truncation. For a fair comparison, we use the same prompts as in Wei et al. [2022]. The threshold of DE is set to the top 40%. # 4.2 Main Results Arithmetic Reasoning The results are shown in Tab. 2. MCS generally improves arithmetic reasoning performance by a large margin (4.68 points on average) compared with CoT. MCS +
2306.07932#22
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
23
Table 2 (continued), columns ID, IQ, Q, C, IA, E, CC, CE — row 1: Q: "1) The volume of a cube with edge 2a is: A. 8a^3. B. 2a^3. C. a^3. D. 6a^3."; C: "A"; E: "The volume of a cube with edge 2a is: V=(2a)^3=8a^3."; CC: "A"; CE: "The formula for the volume of a cube is V = s^3, where s is the length of one of its sides. Therefore, the volume of the cube with a side length of 2a is: V = (2a)^3 = 8a^3." (M11A, M11B, M11C, M12A-M12G) and four degrees of complexity (knowledge, comprehension, application, and high application). The exam aims to provide a thorough assessment of the mathematical knowledge and abilities of ChatGPT by evaluating a wide range of topics. The questions are made to test ChatGPT’s understanding, application, evaluation, and analysis of mathematical concepts and principles, ensuring a thorough evaluation of its mathematical skills. This rigorous assessment makes sure that ChatGPT’s math-solving abilities are accurately measured and can be used to guide future NLP advances. # 4.1 ChatGPT score
2306.06331#23
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
23
GPT-3 LLMs as Motion Planners LLMs as Task Planners LLMs as Translators End-to-end Motion Planning Task Planning (naive) No Corrections Syntax Corrections Syntax + Semantic Corrections (AutoTAMP) Rover Overcooked Hard Time & Geometric Constraints 0.0% 0.0% 22.0% 35.0% 60.7% Wall 0.0% 13.3% 25.0% 70.0% 89.0% 0.0% 7.0% 74.0% 85.0% 89.7% GPT-4 LLMs as Motion Planners LLMs as Task Planners LLMs as Translators End-to-end Motion Planning Task Planning (naive) No Corrections Syntax Corrections Syntax + Semantic Corrections (AutoTAMP) NL2TL + Syntax + Semantic Corrections 5.0% 17.0% 85.0% 94.0% 100.0% 100.0% 0.0% 6.0% 0.0% 47.0% 46.0% 95.0% 67.0% 95.0% 100.0% 79.0% 79.7% 100.0%
2306.06531#23
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
23
Footnote 3: http://www.columbia.edu/~md3405/IM_recap_1_16.pdf Footnote 4: Cobb-Douglas indifference curves are what economists refer to as "well-behaved indifference curves"; Cobb-Douglas utility functions have proved useful for presenting algebraic examples in economics. Footnote 5: The text-davinci-002 version is InstructGPT. We use the text-davinci-002 version of GPT-3 for all the experiments. Self-consistency further improves the arithmetic reasoning performance (6.39 points on average). Especially for SingleEq and SVAMP, compared with CoT, the accuracy increased by 9.05 and 12.10 points, respectively. MCS + Self-Consistency performs
2306.07932#23
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models along with Chain-of-thought prompting has made automation more and more omnipresent, it sometimes demonstrates its weakness in long-term or multi-step logical reasoning. For example, users don't always get desirable answers for complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting, which explores how manual correction of sub-logics in rationales can improve LLM's reasoning performance. Moving one step forward, considering a system with human-in-the-loop involves more than having humans improve performance but also controlling the cost. Therefore, we post a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP) based on classical economics theory to analyze, quantify and balance the utility and the corresponding cost. We conduct experiments of MCS and CAMLOP with twelve datasets. A significant advantage w.r.t cost and utility proves its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06331
24
# 4.1 ChatGPT score The results of the mathematics test taken by ChatGPT from 2019 to 2023 are shown in Table 3 [28], together with the number of right answers and the corresponding score for each year. A score of 5 represents an average performance on a scale from 0 to 10. These outcomes show that ChatGPT performed better than average on the math test. ChatGPT's scores range between 0 and 7 points. This outcome can be attributed to ChatGPT's propensity to accurately respond to a significant portion of questions at the knowledge and comprehension levels, which make up 70% of the total questions. The middle-range ChatGPT score is clear from the fact that only a small number of questions at both the application and high application levels were correctly answered. Further clarification on this point will be provided in the upcoming sections.

Table 3: ChatGPT's performance in 2019-2023
| Year | ChatGPT's Performance | ChatGPT's Score |
|---|---|---|
| 2023 | 27/50 | 5.4 |
| 2022 | 31/50 | 6.2 |
| 2021 | 30/50 | 6 |
| 2020 | 33/50 | 6.6 |
| 2019 | 26/50 | 5.2 |
| Average | 147/250 | 5.88 |

# 4.2 ChatGPT's performance in order question [Figure: accuracy (%) by question order.]
2306.06331#24
Investigating the Effectiveness of ChatGPT in Mathematical Reasoning and Problem Solving: Evidence from the Vietnamese National High School Graduation Examination
This study offers a complete analysis of ChatGPT's mathematics abilities in responding to multiple-choice questions for the Vietnamese National High School Graduation Examination (VNHSGE) on a range of subjects and difficulty levels. The dataset included 250 questions divided into four levels: knowledge (K), comprehension (C), application (A), and high application (H), and it included ten themes that covered diverse mathematical concepts. The outcomes demonstrate that ChatGPT's performance varies depending on the difficulty level and subject. It performed best on questions at Level (K), with an accuracy rate of $83\%$; but, as the difficulty level rose, it scored poorly, with an accuracy rate of $10\%$. The study has also shown that ChatGPT significantly succeeds in providing responses to questions on subjects including exponential and logarithmic functions, geometric progression, and arithmetic progression. The study found that ChatGPT had difficulty correctly answering questions on topics including derivatives and applications, spatial geometry, and Oxyz spatial calculus. Additionally, this study contrasted ChatGPT outcomes with Vietnamese students in VNHSGE and in other math competitions. ChatGPT dominated in the SAT Math competition with a success rate of $70\%$, followed by VNHSGE mathematics ($58.8\%)$. However, its success rates were lower on other exams, such as AP Statistics, the GRE Quantitative, AMC 10, AMC 12, and AP Calculus BC. These results suggest that ChatGPT has the potential to be an effective teaching tool for mathematics, but more work is needed to enhance its handling of graphical data and address the challenges presented by questions that are getting more challenging.
http://arxiv.org/pdf/2306.06331
Xuan-Quy Dao, Ngoc-Bich Le
cs.CL, cs.LG
17 pages, 14 images
null
cs.CL
20230610
20231031
[ { "id": "2303.08774" }, { "id": "2301.13867" }, { "id": "2305.12199" }, { "id": "2302.03494" } ]
2306.06531
24
we find that all methods using LLMs as task planners outperform our approach; whereas our approach can fail due to translation errors, this environment permits direct trajectories between any two positions and thus lacks geometric challenges that direct task planning methods will struggle with. When adding a strict time constraint (HouseWorld2), we see that such methods perform much worse while AutoTAMP's success rate persists. For the other tasks that include geometric constraints, LLM End-to-end Motion Planning and Naive Task Planning both perform quite poorly. Unsurprisingly, we observe a general trend that GPT-4 outperforms GPT-3. We find that most failures for LLM Task Planning methods result from task execution time violation and sequencing of actions for long-horizon tasks. For example, Chip's Challenge requires the robot to efficiently collect keys for future doors. Also, the Naive Task Planning method fails to avoid collisions in the multi-agent scenarios. Failures for methods that translate to STL primarily are due to incorrect translation; while our re-prompting techniques help address this issue, there remain cases of poor translation. In Table I and Table II, we evaluate the impact of syntactic and semantic error correction on using LLMs to translate to STL. The results show that translation with no error correction has modest success across task scenarios, but both syntactic and semantic error
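To illustrate how such autoregressive re-prompting works in general, the sketch below shows a syntax-repair loop of the kind described; it is a simplified illustration rather than the released AutoTAMP code, and `llm_translate` and `check_stl_syntax` are hypothetical stand-ins for the LLM call and an STL parser:

```python
def translate_with_syntax_repair(llm_translate, check_stl_syntax,
                                 instruction, max_rounds=3):
    """Autoregressive re-prompting: translate NL -> STL, then feed any
    syntax error message back to the LLM until the formula parses."""
    prompt = f"Translate to STL: {instruction}"
    stl = llm_translate(prompt)
    for _ in range(max_rounds):
        error = check_stl_syntax(stl)     # None if the formula is well-formed
        if error is None:
            return stl
        # Re-prompt with the previous output and the checker's error message.
        prompt = (f"Translate to STL: {instruction}\n"
                  f"Previous attempt: {stl}\n"
                  f"Syntax error: {error}\n"
                  f"Please output a corrected STL formula.")
        stl = llm_translate(prompt)
    return stl  # best effort after max_rounds

# A semantic check would follow the same pattern, re-prompting with the
# planned trajectory's state sequence instead of a parser error message.

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs: a "translator" that forgets a closing
    # parenthesis on the first try and a checker that only balances parentheses.
    attempts = iter(["finally[0,10] (enter(goal)", "finally[0,10] (enter(goal))"])
    fake_llm = lambda prompt: next(attempts)
    fake_checker = lambda s: None if s.count("(") == s.count(")") else "unbalanced parentheses"
    print(translate_with_syntax_repair(fake_llm, fake_checker, "reach the goal within 10s"))
```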
2306.06531#24
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
24
                        AddSub  MultiArith  SingleEq  SingleOp  ASDiv  AQuA   SVAMP  GSM8K
CoT-prompting           82.78   93.00       85.04     94.84     73.19  40.55  68.00  56.48
Self-consistency        90.63   94.17       89.17     95.73     77.72  38.19  75.70  58.85
MCS                     92.15   95.50       92.51     96.62     75.52  44.09  74.60  61.56
MCS + Self-consistency  97.22   95.50       94.09     98.75     79.63  41.34  80.10  62.92

Table 2: Arithmetic reasoning accuracy by MCS and MCS + Self-consistency compared to Chain-of-Thought prompting and Self-consistency. For each task, we report the median scores among 5 runs.
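The reporting convention in the caption ("median scores among 5 runs") is simple to reproduce; a minimal sketch, with per-run accuracies invented purely for illustration:

```python
from statistics import median

# Per-run accuracies for one method on one dataset (illustrative values only).
runs_by_task = {"GSM8K": [61.3, 60.9, 61.56, 62.0, 61.6]}

# Report the median of the 5 runs for each task.
reported = {task: median(scores) for task, scores in runs_by_task.items()}
print(reported)  # {'GSM8K': 61.56}
```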
2306.07932#24
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models, together with Chain-of-Thought prompting, has made automation increasingly omnipresent, these models sometimes show weaknesses in long-term or multi-step logical reasoning. For example, users do not always get desirable answers to complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting -- which explores how manual correction of sub-logics in rationales can improve an LLM's reasoning performance. Moving one step forward, designing a human-in-the-loop system involves not only having humans improve performance but also controlling the cost. Therefore, we propose a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP), based on classical economics theory, to analyze, quantify, and balance the utility and the corresponding cost. We conduct experiments on MCS and CAMLOP with twelve datasets. A significant advantage w.r.t. cost and utility demonstrates its superiority over strong baselines.
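The workflow sketched in this abstract — generate a Chain-of-Thought rationale, let a human correct a faulty sub-logic, then derive the answer from the corrected rationale — can be illustrated with a small Python sketch. It is not the paper's implementation: both `llm_*` helpers are hypothetical stand-ins stubbed to run offline, and sub-logics are approximated as sentences of the rationale.

```python
def llm_generate_rationale(question: str) -> str:
    """Hypothetical stand-in for a Chain-of-Thought generation call."""
    return ("There are 3 boxes with 4 apples each, so 3 * 4 = 12 apples. "
            "Half are eaten, so 12 / 2 = 5 apples remain.")  # second step is faulty


def llm_answer_from_rationale(question: str, rationale: str) -> str:
    """Hypothetical stand-in: read the answer off the rationale's last equation."""
    return rationale.rsplit("=", 1)[-1].strip(" .")


def human_in_the_loop_answer(question: str, corrections: dict[int, str]) -> str:
    rationale = llm_generate_rationale(question)
    # Split the rationale into sub-logics (approximated here as sentences).
    sub_logics = [s.strip().rstrip(".") + "." for s in rationale.split(". ") if s.strip()]
    # Apply the human's corrections, indexed by sub-logic position.
    for idx, fixed_step in corrections.items():
        sub_logics[idx] = fixed_step
    return llm_answer_from_rationale(question, " ".join(sub_logics))


question = "3 boxes hold 4 apples each; half of the apples are eaten. How many remain?"
print(human_in_the_loop_answer(question, {1: "Half are eaten, so 12 / 2 = 6 apples remain."}))
```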
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]
2306.06531
25
correction significantly improve performance; this trend is present across all scenarios. We also evaluate replacing a pre-trained LLM for translation with a state-of-the-art modular translation pipeline, NL2TL, that uses a smaller LLM (T5-large) fine-tuned on a multi-domain corpus of 30K examples of instructions paired with their corresponding temporal logic expressions [21]; the error correction steps were still performed by GPT-4. Integrating NL2TL performs similarly to using a pre-trained LLM for translation, providing a modest improvement in HouseWorld2 and Rover. We note that incorporating the two re-prompting techniques for error correction is competitive with fine-tuning since we do not rely on additional data or training. 3D Simulation: In supplemental videos, we demonstrate plans generated via AutoTAMP in two 3D simulated environments: a drone navigation scenario that requires reasoning about height, and a tabletop color sorting manipulation scenario. We did not incorporate the semantic check for these demos. The STL planner is directly applicable to the drone scenario using timed waypoints, as done in the 2D experiments. For manipulation tasks, we integrated a simple discrete planner to handle the dynamics mode transitions. We discuss this more in Section VII. Physical Demonstrations: We demonstrate AutoTAMP on physical differential-drive robots via the remotely-accessible
2306.06531#25
AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers
For effective human-robot interaction, robots need to understand, plan, and execute complex, long-horizon tasks described by natural language. Recent advances in large language models (LLMs) have shown promise for translating natural language into robot action sequences for complex tasks. However, existing approaches either translate the natural language directly into robot trajectories or factor the inference process by decomposing language into task sub-goals and relying on a motion planner to execute each sub-goal. When complex environmental and temporal constraints are involved, inference over planning tasks must be performed jointly with motion plans using traditional task-and-motion planning (TAMP) algorithms, making factorization into subgoals untenable. Rather than using LLMs to directly plan task sub-goals, we instead perform few-shot translation from natural language task descriptions to an intermediate task representation that can then be consumed by a TAMP algorithm to jointly solve the task and motion plan. To improve translation, we automatically detect and correct both syntactic and semantic errors via autoregressive re-prompting, resulting in significant improvements in task completion. We show that our approach outperforms several methods using LLMs as planners in complex task domains. See our project website https://yongchao98.github.io/MIT-REALM-AutoTAMP/ for prompts, videos, and code.
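The semantic half of the error correction described in this abstract (checking whether a syntactically valid formula actually captures the instruction's intent, and re-prompting with the critique if not) can be sketched in the same style as the syntactic loop above. The helper names and the verdict/critique protocol are assumptions for illustration, not the paper's prompts; the stubs let the example run offline.

```python
def llm_translate(instruction: str, feedback: str = "") -> str:
    """Hypothetical stand-in for the translation call (stubbed to run offline)."""
    return "finally(enter(goal)) and globally(not_enter(obstacle))"


def llm_semantic_check(instruction: str, formula: str) -> tuple[bool, str]:
    """Hypothetical stand-in: ask an LLM whether the formula matches the intent.

    A real system would parse a structured verdict and critique out of the
    LLM's response; the stub simply accepts the candidate.
    """
    return True, ""


def translate_with_semantic_repair(instruction: str, max_rounds: int = 3) -> str:
    formula, feedback = "", ""
    for _ in range(max_rounds):
        formula = llm_translate(instruction, feedback)
        ok, critique = llm_semantic_check(instruction, formula)
        if ok:
            return formula   # pass to the STL/TAMP planner
        feedback = critique  # autoregressive re-prompting with the critique
    return formula           # fall back to the last candidate


print(translate_with_semantic_repair("reach the goal while always avoiding the obstacle"))
```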
http://arxiv.org/pdf/2306.06531
Yongchao Chen, Jacob Arkin, Charles Dawson, Yang Zhang, Nicholas Roy, Chuchu Fan
cs.RO, cs.CL, cs.HC
8 pages, 4 figures
null
cs.RO
20230610
20230927
[ { "id": "1706.06927" }, { "id": "2207.00627" }, { "id": "2305.14909" }, { "id": "2305.07766" }, { "id": "2304.11477" }, { "id": "2304.03893" }, { "id": "2204.01691" }, { "id": "2305.05658" }, { "id": "2207.05608" }, { "id": "2303.08006" }, { "id": "2305.11014" }, { "id": "2303.06247" }, { "id": "2303.14100" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "2302.05128" }, { "id": "2209.07753" } ]
2306.07932
25
Commonsense and Symbolic Reasoning: Tab. 3 shows the results on commonsense and symbolic reasoning tasks. Similarly, MCS improves the performance and MCS + Self-consistency further boosts it. For symbolic reasoning, we adopt the out-of-distribution (OOD) setting where the input prompt contains samples of 4-letters and 4-flips [Wang et al., 2022] because this setting is more challenging. We do not adopt the in-distribution setting because GPT-3 can already achieve 100% accuracy in the in-distribution setting, as shown in Wei et al. [2022]. Even in the difficult OOD setting, the gain of MCS + Self-consistency is significant compared to CoT-prompting and Self-consistency.

                        Commonsense       Symbolic
Model                   CSQA    StraQA    Letter  Coinflip
CoT-prompting           72.32   60.13     49.20   81.40
Self-consistency        76.09   61.40     54.40   93.20
MCS                     73.71   60.88     75.40   81.40
MCS + Self-consistency  77.07   62.23     78.40   93.20

Table 3: Commonsense and symbolic reasoning accuracy. For each task, we report the median scores among 5 runs.
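The two symbolic tasks scored in Table 3 (last-letter concatenation and coin flip) have mechanically checkable gold labels, which is what makes accuracy on the 4-letter / 4-flip OOD instances straightforward to compute; a minimal sketch, with example inputs invented purely for illustration:

```python
def last_letter_concat(words: list[str]) -> str:
    """Gold label for the last-letter concatenation task."""
    return "".join(w[-1] for w in words)


def coin_still_heads_up(flips: list[bool]) -> str:
    """Gold label for coin flip: the coin starts heads up; each True flips it."""
    return "yes" if sum(flips) % 2 == 0 else "no"


# 4-letter / 4-flip instances (example inputs invented for illustration).
print(last_letter_concat(["Elon", "Larry", "Bill", "Steve"]))  # "nyle"
print(coin_still_heads_up([True, False, True, True]))          # "no" (odd number of flips)
```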
2306.07932#25
Human-in-the-Loop through Chain-of-Thought
While the emergence of powerful language models, together with Chain-of-Thought prompting, has made automation increasingly omnipresent, these models sometimes show weaknesses in long-term or multi-step logical reasoning. For example, users do not always get desirable answers to complex mathematical problems without human involvement. Against this background, we present the Manual Correction System (MCS) -- a human-in-the-loop system enhanced by Chain-of-Thought prompting -- which explores how manual correction of sub-logics in rationales can improve an LLM's reasoning performance. Moving one step forward, designing a human-in-the-loop system involves not only having humans improve performance but also controlling the cost. Therefore, we propose a Cost-utility Analysis Model for Human-in-the-Loop systems (CAMLOP), based on classical economics theory, to analyze, quantify, and balance the utility and the corresponding cost. We conduct experiments on MCS and CAMLOP with twelve datasets. A significant advantage w.r.t. cost and utility demonstrates its superiority over strong baselines.
http://arxiv.org/pdf/2306.07932
Zefan Cai, Baobao Chang, Wenjuan Han
cs.CL, cs.AI
null
null
cs.CL
20230610
20230623
[ { "id": "1904.09751" }, { "id": "2110.08207" }, { "id": "2206.04615" }, { "id": "2106.15772" }, { "id": "2110.14168" }, { "id": "1805.06087" }, { "id": "1608.01413" }, { "id": "1707.02633" }, { "id": "2203.02155" }, { "id": "2103.07191" }, { "id": "1805.04833" }, { "id": "2201.11903" }, { "id": "1905.13319" }, { "id": "2203.11171" }, { "id": "2205.01068" }, { "id": "2205.11916" }, { "id": "1811.00937" } ]