ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning (arXiv:2308.13724)

[Figure 2, left portion: example prompts for Step 1 (preprocessing with the LLM translator), Step 2 (planning with the LLM planner), and Step 3.2 (iterative self-refinement re-planning), illustrated on a three-block Blocksworld task; the multi-column layout was garbled in extraction.]
[Figure 2, right portion: self-validator feedback (Step 3.1) and re-planning prompts for a four-block example; the multi-column layout was garbled in extraction.]

Figure 2: Examples of the prompts used in ISR-LLM. The prompt provided to the LLM contains two parts: the few-shot examples (shaded with a yellow color) and the actual question (blue). Details about the few-shot examples are given in Appendix A. The texts shaded with a green color represent the LLM's responses. The LLM translator first converts the natural language instructions into PDDL domain and problem files. Then, an initial plan is generated using the translated files, which is subsequently revised through an iterative self-refinement process.

The self-refinement loop persists until either the validator identifies no errors or a predefined maximum number of iterations is reached.
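The translate-plan-refine loop just described can be sketched in a few lines. This is a minimal illustration, not the authors' implementation; the three callables (`translate_to_pddl`, `generate_plan`, `validate`) are hypothetical stand-ins for the actual LLM prompts and validator.

```python
# Sketch of the ISR-LLM pipeline: translate -> plan -> iteratively refine.
# The callables are hypothetical placeholders, not the paper's code.
from typing import Callable, List, Tuple

def isr_llm(instruction: str,
            translate_to_pddl: Callable[[str], Tuple[str, str]],
            generate_plan: Callable[[str, str], List[str]],
            validate: Callable[[str, str, List[str]], str],
            max_iters: int = 5) -> List[str]:
    """Refine the plan until the validator reports no error ("OK")
    or the predefined maximum number of iterations is reached."""
    domain, problem = translate_to_pddl(instruction)   # Step 1: LLM translator
    plan = generate_plan(domain, problem)              # Step 2: LLM planner
    feedback_history: List[str] = []
    for _ in range(max_iters):                         # Step 3: self-refinement
        feedback = validate(domain, problem, plan)     # self- or external validator
        if feedback == "OK":
            break
        feedback_history.append(feedback)
        # Re-plan with the accumulated feedback appended to the prompt
        plan = generate_plan(domain, problem + "\n" + "\n".join(feedback_history))
    return plan
```

Either validator type plugs in through the same `validate` interface, which is what lets the framework swap an LLM-based self-validator for an external tool.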
The action sequence resulting from the iterative self-refinement loop is then accepted as the final generated action sequence. We consider two types of validators: a self-validator, which employs the LLM to assess the correctness of the generated action plan, and an external validator, which leverages external tools to perform the analysis. It is worth mentioning that, although the external validator is capable of providing accurate feedback on the feasibility of the generated plan, its implementation often demands considerable effort and may be unavailable for certain tasks. Conversely, using an LLM as an internal self-validator economizes both time and effort, but carries the inherent risk of yielding imprecise or even erroneous feedback. The selection of the validator type therefore hinges upon the specific evaluation requirements and the context of the validation scenario. An example of the prompts provided to the LLM-based self-validator is shown in Fig. 2, where few-shot learning and CoT techniques are also employed. All examples used for the experimental domains explored in this work are given in Appendix A.3.

# 5 Experimental Results

To evaluate the performance of ISR-LLM in long-horizon sequential task planning, we perform experiments across three diverse planning domains. Moreover, we investigate the influence of different LLMs on the performance of ISR-LLM, as well as the impact of the LLM translator. A detailed explanation of the experimental setup and results is provided in the following subsections.
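Before turning to the experiments, the external-validator idea above can be made concrete: unlike a self-validator, which is simply another LLM prompt, an external validator checks each action symbolically against the state. The following is a minimal, hypothetical sketch for the four Blocksworld actions, not the custom validator used in this work; note how it pinpoints the exact failing action, the kind of precise feedback discussed later.

```python
# Hypothetical external validator for Blocksworld: checks each action's
# preconditions against a symbolic state. Simplified sketch, not the
# paper's custom implementation.

def validate_blocksworld(plan, on, holding=None):
    """`plan` is a list of tuples like ("unstack", "b3", "b1").
    `on` maps each block to what it rests on ("table" or another block).
    Returns "OK" or a message naming the first infeasible action."""
    def clear(b):  # a block is clear if nothing rests on it
        return b not in on.values()
    for i, (act, *args) in enumerate(plan):
        if act == "pickup":
            (b,) = args
            if holding or on.get(b) != "table" or not clear(b):
                return f"action {i} ({act} {b}) violates a precondition"
            holding, on = b, {k: v for k, v in on.items() if k != b}
        elif act == "putdown":
            (b,) = args
            if holding != b:
                return f"action {i} ({act} {b}) violates a precondition"
            holding, on = None, {**on, b: "table"}
        elif act == "unstack":
            b, c = args
            if holding or on.get(b) != c or not clear(b):
                return f"action {i} ({act} {b} {c}) violates a precondition"
            holding, on = b, {k: v for k, v in on.items() if k != b}
        elif act == "stack":
            b, c = args
            if holding != b or not clear(c):
                return f"action {i} ({act} {b} {c}) violates a precondition"
            holding, on = None, {**on, b: c}
    return "OK"
```

On the three-block example of Fig. 2 (b3 on b1, b1 and b2 on the table), the plan `unstack b3 b1; putdown b3; pickup b2; stack b2 b3; pickup b1; stack b1 b2` validates as "OK", while starting with `unstack b2 b1` is flagged at action 0.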
Figure 3: Three planning domains used in this work: (a) Cooking, (b) Blocksworld, (c) Ball Moving; each panel shows an initial state and the goal conditions.

# 5.1 Experimental Setup

We utilize the following three planning domains as benchmark problems to evaluate the performance of ISR-LLM. These domains are derived from existing literature and are extensively employed in planning research Liu et al. (2023); Silver et al. (2023); Valmeekam et al. (2022); Silver et al. (2022). Detailed examples of each planning domain are presented in Appendix A.
• Cooking: There are n pots and a total of 6 different ingredients (see Fig. 3a). The robot's task is to add ingredients to each pot according to a prescribed recipe. Each pot possesses its own randomly generated recipe, which stipulates the inclusion of 2 to 4 different ingredients. The robot has three actions: picking up an ingredient, putting down an ingredient, and adding the ingredient to a pot. A constraint that must be fulfilled is that each ingredient may only be retrieved once by the robot, i.e., once the robot has picked up an ingredient, it must distribute it to all pots that require this ingredient as per their individual recipes.
• Blocksworld: There are n blocks, initially randomly placed on a table. The objective of the robot is to assemble these blocks into a stack, adhering to a specific prescribed order (see Fig. 3b). The robot has four actions: picking up a block that is on the table, putting down a block that is currently in its hand onto the table, unstacking a block from the top of another block to hold it in its hand, and stacking the block that is currently in its hand on top of another block. However, the robot can only manipulate one block at a time, i.e., any block that has other blocks situated on top of it is considered fixed.
• Ball Moving: There are n balls, initially randomly distributed among 4 rooms (see Fig. 3c). The robot needs to relocate the balls to their predefined goal rooms, with the constraint that it can hold no more than one ball at a time. The robot has three actions: picking up a ball, putting down a ball, and moving from its current room to another room.

Table 1: Success rate of ISR-LLM in different planning domains.

| Planning domain | GPT3.5 LLM-direct | GPT3.5 ISR-LLM-self | GPT3.5 ISR-LLM-external | GPT4 LLM-direct | GPT4 ISR-LLM-self | GPT4 ISR-LLM-external |
|---|---|---|---|---|---|---|
| Cooking (n = 3) | 47% | 67% | 100% | 100% | 100% | 100% |
| Cooking (n = 4) | 40% | 53% | 63% | 100% | 100% | 100% |
| Blocksworld (n = 3) | 20% | 37% | 70% | 43% | 60% | 97% |
| Blocksworld (n = 4) | 10% | 17% | 53% | 40% | 60% | 80% |
| Ball Moving (n = 3) | 33% | 50% | 70% | 93% | 100% | 100% |
| Ball Moving (n = 4) | 17% | 27% | 57% | 90% | 93% | 97% |

For all three planning domains, we investigate two specific cases, n = 3 and n = 4, to examine the influence of the number of objects, which is directly correlated with the complexity of the task, on the performance of the proposed ISR-LLM framework. Furthermore, to evaluate the impact of different LLMs on the planning outcomes, we employ two LLMs, namely GPT3.5 and GPT4, and compare their capabilities in task planning within the ISR-LLM framework. For each planning task, we evaluate three different methods: (1) LLM-direct, the baseline approach grounded in Silver et al. (2023, 2022); Valmeekam et al. (2022), which leverages the LLM to formulate an action plan directly from the given PDDL input.
To ensure a fair comparison with ISR-LLM, we utilize the LLM translator to convert natural language inputs into PDDL files in this method as well. (2) ISR-LLM-self, which employs the ISR-LLM framework with an LLM-based self-validator. (3) ISR-LLM-external, which incorporates an external validator to generate feedback for ISR-LLM. In order to mitigate the influence of existing PDDL validators and focus on analyzing the performance of ISR-LLM itself, we implement our own custom external validators in this work. We randomly generate 30 unique cases with varying initial states and goal conditions for each planning task. The few-shot examples used for the LLM translator, the LLM planner, and the LLM-based self-validator are given in Appendix A. All LLM responses during the experiments are presented on our website¹. The success rates of task accomplishment for the three aforementioned methods are recorded. All experiments are conducted on a laptop equipped with an Intel(R) Core(TM) i7-10870H CPU @ 2.20GHz with 8 CPUs and an NVIDIA RTX 3080 Max-Q GPU with 16 GB VRAM. The detailed results are presented in the next subsection.

# 5.2 Performance of ISR-LLM

The results of the experiments are summarized in Table 1. In the cases utilizing GPT3.5, the proposed ISR-LLM framework demonstrates a notable enhancement in success rates across all planning domains when compared to the baseline approach. While the LLM-based self-validator contributes an approximate 15% increase in performance, the external validator further amplifies the success rate by roughly 40% to 50%. The only exception occurs in the case n = 4 for the Cooking domain, where a 23% increase is observed. This might be attributed to the excessive number of required actions in this planning task, rendering LLMs less effective at correcting errors. The success rates are also influenced by task complexity, as indicated by the number of objects. Increases in object numbers correspond to decreased success rates in the Cooking, Blocksworld, and Ball Moving domains for all three approaches (LLM-direct: -7%, -10%, -16%; ISR-LLM-self: -14%, -20%, -23%; ISR-LLM-external: -37%, -17%, -13%).
This trend reflects the increased difficulty of rectifying erroneous actions as the planning horizon extends. Moreover, the success rate varies among planning domains. Compared to the Cooking and Ball Moving domains, the Blocksworld domain, which demands more sophisticated logical thinking, shows lower success rates. Nevertheless, the proposed ISR-LLM is still able to improve the planning outcomes within this domain. It can also be observed that GPT4 greatly outperforms GPT3.5 in long-horizon sequential task planning, corroborating the common assertion that GPT4 possesses markedly superior reasoning capability. The baseline method, i.e., LLM-direct, when coupled with GPT4, achieves a success rate exceeding 90% in the Cooking and Ball Moving domains, where ISR-LLM also maintains this high performance level. However, in the more logically complex Blocksworld domain, GPT4 demonstrates diminished performance using the baseline approach. Nevertheless,
¹https://github.com/zhehuazhou/ISR-LLM

Table 2: Success rate of ISR-LLM with and without the LLM translator in the Blocksworld domain with n = 3 and GPT3.5.

| Method | LLM-direct | ISR-LLM-self | ISR-LLM-external |
|---|---|---|---|
| With LLM translator | 20% | 36% | 70% |
| Without LLM translator | 13% | 16% | 63% |

Figure 4: Grounding of actions in the Blocksworld domain with four blocks: (a) unstack b1 from b2; (b) put down b1; (c) pick up b3; (d) stack b3 on b2; (e) pick up b1; (f) stack b1 on b3; (g) pick up b4; (h) stack b4 on b1. Initially, blocks b2 (red), b3 (green), and b4 (pink) are on the table, and block b1 (blue) is on top of block b2. The goal is to stack the blocks in the given order: b4 on b1, b1 on b3, b3 on b2, and b2 on the table.
the employment of ISR-LLM also elevates the success rate in this domain, with the self-validator contributing an increase of about 20% and the external validator an enhancement of more than 40%. Interestingly, the influence of the number of objects appears less pronounced when GPT4 is utilized. This may be attributed to GPT4's enhanced reasoning capabilities, which facilitate more effective logical thinking and thereby mitigate the impact of the number of objects on the results.

# 5.3 Influence of the LLM Translator

We also evaluate the influence of the LLM translator, using the Blocksworld domain with n = 3 and GPT3.5 as an example, as this is the case in which the efficacy of ISR-LLM is most evident. By omitting the LLM translator and directly using natural language input, we compare the success rates of task planning and present the results in Table 2. It can be observed that, while the LLM translator only slightly improves the planning performance of the baseline approach, the self-validator greatly benefits from the translator, showing a 20% increase in the success rate. The reason could be that the translated PDDL files offer a symbolic and logical representation of the planning domain, thereby allowing the LLM to form a more concrete understanding of the system state, as opposed to relying solely on linguistic cues. In contrast, the performance of the external validator remains relatively consistent, irrespective of the presence of the LLM translator. This consistency arises from our custom validator's ability to provide accurate feedback whether or not PDDL formulations are employed. However, as previously mentioned, introducing translated PDDL files enables the use of existing PDDL validators, potentially saving substantial time and effort otherwise needed to implement a custom validator.
# 5.4 Grounding the Actions

Although it is beyond the scope of this work, we further demonstrate that the generated action plan can be directly grounded into feasible robot actions when paired with a suitable motion planner. This highlights another advantage of employing the LLM translator within the ISR-LLM framework: the use of the PDDL formulation ensures that each generated action conforms to a predefined definition and structure, which simplifies the task of the motion planner in converting the action plan into executable robot movements. Figure 4 illustrates this grounding process, using an example from the Blocksworld domain with four blocks. Here, a pick-and-place controller is employed to execute the four different types of actions, assuming the robot knows the locations of the blocks. The simulation is conducted in NVIDIA Omniverse Isaac Sim².
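The fixed PDDL action structure makes this grounding mechanical: each action name can be dispatched to one motion primitive. The sketch below is hypothetical; `PickPlaceController` is a stub standing in for the actual Isaac Sim pick-and-place controller, not its API. It grounds the Figure 4 plan.

```python
# Hypothetical grounding of PDDL actions to motion primitives.
# PickPlaceController is a stand-in, not NVIDIA Isaac Sim's API.

class PickPlaceController:
    """Stub motion controller: records the primitive calls it would execute."""
    def __init__(self):
        self.log = []
    def grasp_from(self, block, support):
        self.log.append(f"grasp {block} from {support}")
    def release_onto(self, block, support):
        self.log.append(f"release {block} onto {support}")

def ground_plan(plan, controller):
    """Map each PDDL action, e.g. "(unstack b3 b1)", to controller calls."""
    for step in plan:
        name, *args = step.strip("()").split()
        if name == "pickup":
            controller.grasp_from(args[0], "table")
        elif name == "putdown":
            controller.release_onto(args[0], "table")
        elif name == "unstack":
            controller.grasp_from(args[0], args[1])
        elif name == "stack":
            controller.release_onto(args[0], args[1])
        else:
            raise ValueError(f"unknown action: {name}")

# The Figure 4 plan, grounded through the stub controller:
ctrl = PickPlaceController()
ground_plan(["(unstack b1 b2)", "(putdown b1)", "(pickup b3)", "(stack b3 b2)",
             "(pickup b1)", "(stack b1 b3)", "(pickup b4)", "(stack b4 b1)"], ctrl)
```

Because every plan uses only these four action schemas, adding a new domain means adding dispatch cases rather than redesigning the motion planner interface.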
# 6 Discussion

Self-Validator and External Validator. Generally, the external validator is capable of providing feedback precise enough to identify the exact action in which an error resides, whereas the self-validator usually only provides an overarching estimate of the correctness of the entire generated action plan. As a consequence, the external validator often leads to superior performance, since precise feedback greatly facilitates the correction of erroneous actions. This benefit becomes more obvious as the planning horizon extends or when complex logical thinking is demanded. However, as mentioned above, the external validator requires additional design and implementation effort. In contrast, the self-validator can be employed easily and directly without extra work. The selection between these validator types should therefore be carefully considered in light of the specific task requirements and the resources available.

Planning Domains. The planning capabilities of LLMs are influenced by the inherent characteristics of the planning domains. As observed from our experimental results, LLMs appear to excel in planning tasks that focus on adhering to specific instructions, such as Cooking, or on performing repeated actions with identifiable patterns, e.g., Ball Moving. Conversely, when planning tasks demand more complex logical thinking, as in the Blocksworld domain, their planning performance tends to diminish. This phenomenon is more pronounced in the GPT4 cases. The underlying reason could be that LLMs are essentially trained to generate word sequences that mirror human-like thought processes, which suits tasks requiring instruction or pattern following. However, when critical logical reasoning becomes a vital component of the task, the inherent reasoning abilities of the LLMs become more important. This suggests that enhancing the reasoning capabilities of LLMs should be a priority when aiming to utilize them as planners for more intricate planning tasks.

Limitations. One limitation of current LLM-based planners, even with the proposed ISR-LLM framework, is that the overall success rate often fails to exceed that of traditional search-based planners. However, as an initial exploratory work, we demonstrate the potential of utilizing an LLM as a versatile and task-agnostic planner. This could significantly facilitate the deployment of various robotic systems across diverse scenarios and minimize the effort required in planning system design. Moreover, the planning abilities of the ISR-LLM framework may see substantial improvements through refinements in the underlying reasoning capabilities of the LLMs.
|
This could potentially be achieved through parameter fine-tuning techniques, such as integrating a fine-tuned LLM specifically designed for task planning. Another limitation stems from the inherent randomness within LLMs, which complicates assurances such as correctness or constraint satisfaction in the generated action plan. The employment of LLMs may therefore be inappropriate for certain tasks, especially those that are safety-critical.

# 7 Conclusion

In this paper, we explore the potential of leveraging LLMs for long-horizon sequential task planning based on natural language input. To improve the correctness of the generated action plan, we introduce the ISR-LLM framework, which employs an iterative self-refinement approach for automatic plan

²https://developer.nvidia.com/isaac-sim
|
revisions. This framework consists of three steps. First, an LLM translator converts the natural language input into a PDDL formulation, represented by PDDL files. Second, using these translated PDDL files, an LLM planner formulates an initial action plan. Third, an iterative self-refinement loop is initiated, wherein either an LLM-based self-validator or an external validator provides feedback on the correctness of the action plan, allowing the LLM planner to make the necessary revisions. Through extensive experiments across three diverse planning domains, we demonstrate that ISR-LLM surpasses the performance of existing state-of-the-art LLM-based planners in long-horizon sequential task planning. While maintaining the flexibility and generalizability to work with natural language input, our ISR-LLM framework consistently achieves high success rates in task accomplishment. For future work, we plan to incorporate motion planning within the ISR-LLM framework, aiming to facilitate reliable and efficient task and motion planning across various robotic application scenarios.
|
2308.13724#27 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | # References Constructions Aeronautiques, Adele Howe, Craig Knoblock, ISI Drew McDermott, Ashwin Ram, Manuela Veloso, Daniel Weld, David Wilkins SRI, Anthony Barrett, Dave Christianson, et al. 1998. Pddl| the planning domain definition language. Technical Report, Tech. Rep. (1998). Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are zero-shot clinical information extractors. arXiv preprint arXiv:2205.12689 (2022). Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. 2022. | 2308.13724#26 | 2308.13724#28 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#28 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 (2022). Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021). Gerhard Brewka, Thomas Eiter, and MirosŠaw Truszczy´nski. 2011. | 2308.13724#27 | 2308.13724#29 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#29 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Answer set programming at a glance. Commun. ACM 54, 12 (2011), 92â 103. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877â | 2308.13724#28 | 2308.13724#30 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#30 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 1901. Benjamin J Cohen, Sachin Chitta, and Maxim Likhachev. 2010. Search-based planning for manipula- tion with motion primitives. In 2010 IEEE international conference on robotics and automation. IEEE, 2902â 2908. Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 conference on empirical methods in natural language processing and the 9th international joint conference on natural language processing (EMNLP-IJCNLP). 1173â | 2308.13724#29 | 2308.13724#31 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#31 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 1178. Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018). Yan Ding, Xiaohan Zhang, Chris Paxton, and Shiqi Zhang. 2023. Task and motion planning with large language models for object rearrangement. arXiv preprint arXiv:2303.06247 (2023). Yan Ding, Xiaohan Zhang, Xingyue Zhan, and Shiqi Zhang. 2020. Task-motion planning for safe and efficient urban driving. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2119â 2125. | 2308.13724#30 | 2308.13724#32 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#32 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Danny Driess, Ozgur Oguz, and Marc Toussaint. 2019. Hierarchical task and motion planning using logic-geometric programming (hlgp). In RSS Workshop on Robust Task and Motion Planning. Stefan Edelkamp and Stefan Schrödl. 2011. Heuristic search: theory and applications. Elsevier. 11 Maria Fox and Derek Long. 2003. PDDL2. 1: An extension to PDDL for expressing temporal planning domains. Journal of artificial intelligence research 20 (2003), 61â | 2308.13724#31 | 2308.13724#33 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#33 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 124. Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. 2021. Integrated task and motion planning. Annual review of control, robotics, and autonomous systems 4 (2021), 265â 293. Caelan Reed Garrett, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2020. Pddlstream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 30. 440â 448. Valentin N Hartmann, Andreas Orthey, Danny Driess, Ozgur S Oguz, and Marc Toussaint. 2022. | 2308.13724#32 | 2308.13724#34 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#34 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics 39, 1 (2022), 239â 252. Patrik Haslum, Nir Lipovetzky, Daniele Magazzeni, Christian Muise, Ronald Brachman, Francesca Rossi, and Peter Stone. 2019. An introduction to the planning domain definition language. Vol. 13. Springer. Malte Helmert. 2006. The fast downward planning system. Journal of Artificial Intelligence Research 26 (2006), 191â 246. Richard Howey, Derek Long, and Maria Fox. 2004. VAL: Automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence. IEEE, 294â 301. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022b. Large language models can self-improve. arXiv preprint arXiv:2210.11610 (2022). Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022a. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning. PMLR, 9118â 9147. Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. 2022c. | 2308.13724#33 | 2308.13724#35 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#35 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608 (2022). Yu-qian Jiang, Shi-qi Zhang, Piyush Khandelwal, and Peter Stone. 2019. Task planning in robotics: an empirical comparison of pddl-and asp-based systems. Frontiers of Information Technology & Electronic Engineering 20 (2019), 363â 373. Hector J Levesque. 2005. | 2308.13724#34 | 2308.13724#36 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#36 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Planning with loops. In IJCAI. 509â 515. John Levine and David Humphreys. 2003. Learning action strategies for planning domains using genetic programming. In Workshops on Applications of Evolutionary Computation. Springer, 684â 695. Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, et al. 2022. | 2308.13724#35 | 2308.13724#37 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#37 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Pre-trained language models for interactive decision-making. Advances in Neural Information Processing Systems 35 (2022), 31199â 31212. Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, and Jeannette Bohg. 2023. Text2motion: From natural language instructions to feasible plans. arXiv preprint arXiv:2303.12153 (2023). Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. Llm+ p: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477 (2023). Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651 (2023). Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066 (2019). | 2308.13724#36 | 2308.13724#38 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#38 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 12 Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21, 1 (2020), 5485â | 2308.13724#37 | 2308.13724#39 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#39 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 5551. Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. 2022. Planning with large language models via corrective re-prompting. arXiv preprint arXiv:2211.09935 (2022). Diederik M Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. 2013. A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research 48 (2013), 67â 113. Javier Segovia-Aguas, Sergio Jiménez, and Anders Jonsson. 2021. | 2308.13724#38 | 2308.13724#40 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#40 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Generalized planning as heuristic search. In Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 31. 569–577. Pratyusha Sharma, Antonio Torralba, and Jacob Andreas. 2021. Skill induction and planning with latent language. arXiv preprint arXiv:2110.01517 (2021). Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B Tenenbaum, Leslie Pack Kaelbling, and Michael Katz. 2023. Generalized Planning in PDDL Domains with Pretrained Large Language Models. arXiv preprint arXiv:2305.11014 (2023). Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2022. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop. Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. 2023. | 2308.13724#39 | 2308.13724#41 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#41 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 11523–11530. John Slaney and Sylvie Thiébaux. 2001. Blocks world revisited. Artificial Intelligence 125, 1-2 (2001), 119–153. Marc Toussaint. 2015. Logic-Geometric Programming: An Optimization-Based Approach to Combined Task and Motion Planning. In IJCAI. 1930–1936. | 2308.13724#40 | 2308.13724#42 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#42 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Priyan Vaithilingam, Tianyi Zhang, and Elena L Glassman. 2022. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In CHI Conference on Human Factors in Computing Systems Extended Abstracts. 1–7. Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large Language Models Still Can't Plan (A Benchmark for LLMs on Planning and Reasoning about Change). arXiv preprint arXiv:2206.10498 (2022). Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837. | 2308.13724#41 | 2308.13724#43 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#43 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Elly Zoe Winner. 2008. Learning domain-specific planners from example plans. Ph.D. Dissertation. Carnegie Mellon University. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems 35 (2022), 15476–15488. Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, et al. 2022. | 2308.13724#42 | 2308.13724#44 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#44 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598 (2022). Shiqi Zhang, Fangkai Yang, Piyush Khandelwal, and Peter Stone. 2015. Mobile Robot Planning Using Action Language with an Abstraction Hierarchy. In International Conference on Logic Programming and Nonmonotonic Reasoning. Springer, 502–516. Weixiong Zhang. 1999. | 2308.13724#43 | 2308.13724#45 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#45 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | State-space search: Algorithms, complexity, extensions, and applications. Springer Science & Business Media. Tan Zhi-Xuan. 2022. PDDL.jl: An Extensible Interpreter and Compiler Interface for Fast and Flexible AI Planning. Ph.D. Dissertation. Massachusetts Institute of Technology. # A Prompt Few-Shot Examples # A.1 Translator Cooking: For the Cooking planning domain, we use a total of three few-shot examples, which are given as follows. Example 1 Question: | 2308.13724#44 | 2308.13724#46 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#46 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | I have 3 pots and 6 different ingredients. Each ingredient can only be picked up once. Your goal is to add ingredients to pots by following the recipes: pot1 contains ingredient1, ingredient4. pot2 contains ingredient2, ingredient3, ingredient5, ingredient6. pot3 contains ingredient1, ingredient4, ingredient5. Answer: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i)) :effect (and (not (holding ?i)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) ) Problem file: (define (problem threepots) (:domain cooking) (:objects pot1 pot2 pot3 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) ) (:goal (and (contain pot1 ingredient1) (contain pot1 ingredient4) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient5) (contain pot2 ingredient6) | 2308.13724#45 | 2308.13724#47 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#47 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (contain pot3 ingredient1) (contain pot3 ingredient4) (contain pot3 ingredient5)) ) Example 2 Question: I have 4 pots and 6 different ingredients. Each ingredient can only be picked up once. Your goal is to add ingredients to pots by following the recipes: pot1 contains ingredient2, ingredient3, ingredient6. pot2 contains ingredient1, ingredient2, ingredient5. pot3 contains ingredient5, ingredient6. pot4 contains ingredient2, ingredient4, ingredient6. | 2308.13724#46 | 2308.13724#48 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#48 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Answer: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i)) :effect (and (not (holding ?i)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) ) Problem file: (define (problem fourpots) (:domain cooking) (:objects pot1 pot2 pot3 pot4 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) (pot-empty pot4) ) (:goal (and (contain pot1 ingredient2) (contain pot1 ingredient3) (contain pot1 ingredient6) (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient5) | 2308.13724#47 | 2308.13724#49 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#49 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (contain pot3 ingredient5) (contain pot3 ingredient6) (contain pot4 ingredient2) (contain pot4 ingredient4) (contain pot4 ingredient6)) ) # Example 3 Question: I have 5 pots and 6 different ingredients. Each ingredient can only be picked up once. Your goal is to add ingredients to pots by following the recipes: pot1 contains ingredient3, ingredient4, ingredient5. pot2 contains ingredient1, ingredient2, ingredient3, ingredient6. pot3 contains ingredient2, ingredient3, ingredient5. pot4 contains ingredient1, ingredient6. pot5 contains ingredient1, ingredient3, ingredient4, ingredient5. | 2308.13724#48 | 2308.13724#50 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#50 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Answer: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i)) :effect (and (not (holding ?i)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) ) Problem file: (define (problem fivepots) (:domain cooking) (:objects pot1 pot2 pot3 pot4 pot5 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) (pot-empty pot4) (pot-empty pot5) ) (:goal (and (contain pot1 ingredient3) (contain pot1 ingredient4) (contain pot1 ingredient5) (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient6) (contain pot3 ingredient2) (contain pot3 ingredient3) (contain pot3 ingredient5) (contain pot4 ingredient1) (contain pot4 ingredient6) (contain pot5 ingredient1) (contain pot5 ingredient3) (contain pot5 ingredient4) (contain pot5 ingredient5)) ) | 2308.13724#49 | 2308.13724#51 | 2308.13724 | [
"2211.09935"
]
|
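The Cooking domain defined in the translator examples above is small enough to replay directly. As an illustration of the kind of mechanical plan checking that ISR-LLM's validator performs, the following Python sketch (our own illustration, not code from the paper; the function name and data layout are assumptions) simulates the pick/putdown/add actions and checks whether a sequence reaches a goal.

```python
# Minimal simulator for the Cooking domain above (illustrative sketch, not the paper's code).
def validate_cooking_plan(plan, goal):
    """plan: list of tuples like ("pick", "ingredient1") or ("add", "ingredient1", "pot1");
    goal: dict mapping each pot to the set of ingredients it must contain."""
    holding = None                 # ingredient currently held by the arm
    picked = set()                 # each ingredient may be picked only once
    pots = {p: set() for p in goal}
    for step in plan:
        act, args = step[0], step[1:]
        if act == "pick":
            (ing,) = args
            if holding is not None or ing in picked:
                return False       # arm not empty, or ingredient already used
            holding = ing
            picked.add(ing)
        elif act == "putdown":
            (ing,) = args
            if holding != ing:
                return False
            holding = None
        elif act == "add":
            ing, pot = args
            if holding != ing or pot not in pots:
                return False       # must hold the ingredient; pot must exist
            pots[pot].add(ing)
        else:
            return False           # unknown action
    return all(pots[p] == goal[p] for p in goal)
```

For instance, the Step 1 fragment of Example 1 — (pick ingredient1) (add ingredient1 pot1) (add ingredient1 pot3) (putdown ingredient1) — passes these per-action checks, while a sequence that picks a second ingredient with the arm occupied is rejected.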
2308.13724#51 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Ball Moving: For the Ball Moving planning domain, we use a total of three few-shot examples, which are given as follows. Example 1 Question: I have 3 balls within 4 rooms. Initially: Robot is in room2. Ball ball1 is in room3. Ball ball2 is in room2. Ball ball3 is in room4. Your goal is to move the balls to specific rooms: ball1 in room1, ball2 in room2, and ball3 in room3. | 2308.13724#50 | 2308.13724#52 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#52 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Answer: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r - robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))) ) (:action drop :parameters (?r - robot ?b - ball ?room - room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) ) Problem file: (define (problem threeballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 - ball) (:init (arm-empty) (robot-at robot1 room2) | 2308.13724#51 | 2308.13724#53 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#53 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 18 (at ball1 room3) (at ball2 room2) (at ball3 room4) ) (:goal (and (at ball1 room1) (at ball2 room2) (at ball3 room3)) ) Example 2 Question: I have 4 balls within 4 rooms. Initially: Robot is in room3. Ball ball1 is in room1. Ball ball2 is in room3. Ball ball3 is in room1. Ball ball4 is in room2. Your goal is to move the balls to specific rooms: ball1 in room3, ball2 in room2, ball3 in room4, and ball4 in room4. Answer: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r -robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop :parameters (?r - robot ?b -ball ?room -room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) Problem file: (define (problem fourballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 ball4 - ball) (:init (arm-empty) (robot-at robot1 room3) (at ball1 room1) (at ball2 room3) (at ball3 room1) (at ball4 room2) ) (:goal (and | 2308.13724#52 | 2308.13724#54 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#54 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 19 (at ball1 room3) (at ball2 room2) (at ball3 room4) (at ball4 room4)) ) Example 3 Question: I have 5 balls within 4 rooms. Initially: Robot is in room2. Ball ball1 is in room1. Ball ball2 is in room2. Ball ball3 is in room4. Ball ball4 is in room3. Ball ball5 is in room4. Your goal is to move the balls to specific rooms: ball1 in room1, ball2 in room1, ball3 in room4, ball4 in room2, and ball5 in room1. | 2308.13724#53 | 2308.13724#55 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#55 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Answer: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r - robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))) ) (:action drop :parameters (?r - robot ?b - ball ?room - room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) ) Problem file: (define (problem fiveballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 ball4 ball5 - ball) (:init (arm-empty) (robot-at robot1 room2) (at ball1 room1) (at ball2 room2) (at ball3 room4) (at ball4 room3) (at ball5 room4) ) (:goal (and (at ball1 room1) (at ball2 room1) (at ball3 room4) | 2308.13724#54 | 2308.13724#56 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#56 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (at ball4 room2) (at ball5 room1)) ) Blocksworld: For the Blocksworld planning domain, we use a total of three few-shot examples, which are given as follows. Example 1 Question: I have 3 blocks. Initially: Block b1 is on the table. Block b2 is on the table. Block b3 is on top of b1. Your goal is to move the blocks such that they are stacked in the order: b1 on b2, b2 on b3, and b3 on table. Answer: Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem threeblocks) (:domain blocksworld) (:objects b1 b2 b3) (:init (arm-empty) (on-table b1) (on-table b2) (on b3 b1) (clear b2) (clear b3) | 2308.13724#55 | 2308.13724#57 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#57 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | ) (:goal (and (on b1 b2) (on b2 b3) (on-table b3)) ) # Example 2 Question: I have 4 blocks. Initially: Block b1 is on the table. Block b2 is on top of b4. Block b3 is on top of b1. Block b4 is on the table. Your goal is to move the blocks such that they are stacked in the order: b3 on b2, b2 on b1, b1 on b4, and b4 on table. Answer: Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem fourblocks) (:domain blocksworld) (:objects b1 b2 b3 b4) (:init (arm-empty) (on-table b1) (on b2 b4) (on b3 b1) (on-table b4) | 2308.13724#56 | 2308.13724#58 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#58 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (clear b2) (clear b3) ) (:goal (and (on b3 b2) (on b2 b1) (on b1 b4) (on-table b4)) ) # Example 3 Question: I have 5 blocks. Initially: Block b1 is on the table. Block b2 is on the table. Block b3 is on top of b2. Block b4 is on the table. Block b5 is on top of b4. Your goal is to move the blocks such that they are stacked in the order: b3 on b1, b1 on b4, b4 on b2, b2 on b5, and b5 on table. Answer: Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem fiveblocks) (:domain blocksworld) (:objects b1 b2 b3 b4 b5) (:init (arm-empty) | 2308.13724#57 | 2308.13724#59 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#59 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (on-table b1) (on-table b2) (on b3 b2) (on-table b4) (on b5 b4) (clear b1) (clear b3) (clear b5) ) (:goal (and (on b3 b1) (on b1 b4) (on b4 b2) (on b2 b5) (on-table b5)) ) # A.2 Planner Cooking: For the Cooking planning domain, we use a total of three few-shot examples, which are given as follows. Example 1 Question: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i)) :effect (and (not (holding ?i)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) ) Problem file: (define (problem threepots) (:domain cooking) (:objects pot1 pot2 pot3 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) ) (:goal (and (contain pot1 ingredient1) (contain pot1 ingredient4) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient5) (contain pot2 ingredient6) (contain pot3 ingredient1) (contain pot3 ingredient4) (contain pot3 ingredient5)) ) Answer: | 2308.13724#58 | 2308.13724#60 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#60 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Action Sequence: Step 1: ingredient1 is needed in pot1, pot3 (pick ingredient1) (add ingredient1 pot1) (add ingredient1 pot3) (putdown ingredient1) Step 2: ingredient2 is needed in pot2 (pick ingredient2) (add ingredient2 pot2) (putdown ingredient2) Step 3: ingredient3 is needed in pot2 (pick ingredient3) (add ingredient3 pot2) (putdown ingredient3) Step 4: ingredient4 is needed in pot1, pot3 (pick ingredient4) (add ingredient4 pot1) (add ingredient4 pot3) (putdown ingredient4) Step 5: ingredient5 is needed in pot2, pot3 (pick ingredient5) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5) Step 6: ingredient6 is needed in pot2 (pick ingredient6) (add ingredient6 pot2) (putdown ingredient6) All ingredients have been added to the corresponding pots. # Example 2 Question: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) | 2308.13724#59 | 2308.13724#61 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#61 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i)) :effect (and (not (holding ?i)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) ) Problem file: (define (problem fourpots) (:domain cooking) (:objects pot1 pot2 pot3 pot4 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) (pot-empty pot4) ) (:goal (and (contain pot1 ingredient2) (contain pot1 ingredient6) (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient5) (contain pot3 ingredient5) (contain pot3 ingredient6) (contain pot4 ingredient2) (contain pot4 ingredient4) (contain pot4 ingredient6)) ) Answer: Action Sequence: Step 1: ingredient1 is needed in pot2 (pick ingredient1) (add ingredient1 pot2) (putdown ingredient1) Step 2: ingredient2 is needed in pot1, pot2, pot4 (pick ingredient2) (add ingredient2 pot1) (add ingredient2 pot2) (add ingredient2 pot4) (putdown ingredient2) | 2308.13724#60 | 2308.13724#62 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#62 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Step 3: ingredient3 is not needed in any pot no action needed Step 4: ingredient4 is needed in pot4 (pick ingredient4) (add ingredient4 pot4) (putdown ingredient4) Step 5: ingredient5 is needed in pot2, pot3 (pick ingredient5) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5) Step 6: ingredient6 is needed in pot1, pot3, pot4 (pick ingredient6) (add ingredient6 pot1) (add ingredient6 pot3) (add ingredient6 pot4) (putdown ingredient6) All ingredients have been added to the corresponding pots. # Example 3 Question: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i)) :effect (and (not (holding ?i)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) ) Problem file: (define (problem fivepots) (:domain cooking) (:objects pot1 pot2 pot3 pot4 pot5 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) | 2308.13724#61 | 2308.13724#63 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#63 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (pot-empty pot4) (pot-empty pot5) ) (:goal (and (contain pot1 ingredient3) (contain pot1 ingredient4) (contain pot1 ingredient5) (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient6) (contain pot3 ingredient2) (contain pot3 ingredient3) (contain pot3 ingredient5) (contain pot4 ingredient1) (contain pot4 ingredient6) (contain pot5 ingredient1) (contain pot5 ingredient3) (contain pot5 ingredient4) (contain pot5 ingredient5)) ) Answer: Action Sequence: Step 1: ingredient1 is needed in pot2, pot4, pot5 (pick ingredient1) (add ingredient1 pot2) (add ingredient1 pot4) (add ingredient1 pot5) (putdown ingredient1) Step 2: ingredient2 is needed in pot2, pot3 (pick ingredient2) (add ingredient2 pot2) (add ingredient2 pot3) (putdown ingredient2) Step 3: ingredient3 is needed in pot1, pot2, pot3, pot5 (pick ingredient3) (add ingredient3 pot1) (add ingredient3 pot2) (add ingredient3 pot3) (add ingredient3 pot5) (putdown ingredient3) Step 4: ingredient4 is needed in pot1, pot5 (pick ingredient4) (add ingredient4 pot1) (add ingredient4 pot5) (putdown ingredient4) Step 5: ingredient5 is needed in pot1, pot3, pot5 (pick ingredient5) (add ingredient5 pot1) (add ingredient5 pot3) (add ingredient5 pot5) (putdown ingredient5) Step 6: ingredient6 is needed in pot2, pot4 (pick ingredient6) (add ingredient6 pot2) | 2308.13724#62 | 2308.13724#64 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#64 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (add ingredient6 pot4) (putdown ingredient6) All ingredients have been added to the corresponding pots. Ball Moving: For the Ball Moving planning domain, we use a total of three few-shot examples, which are given as follows. # Example 1 Question: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r - robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))) ) (:action drop :parameters (?r - robot ?b - ball ?room - room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) ) Problem file: (define (problem threeballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 - ball) (:init (arm-empty) (robot-at robot1 room2) (at ball1 room3) (at ball2 room2) (at ball3 room4) ) (:goal (and (at ball1 room1) (at ball2 room2) (at ball3 room3)) ) # Answer: Action Sequence: Step 1: | 2308.13724#63 | 2308.13724#65 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#65 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | robot1 is now in room2. (at ball1 room3) with goal (at ball1 room1) (at ball2 room2) with goal (at ball2 room2) (at ball3 room4) with goal (at ball3 room3) ball1 is unsatisfied, ball2 is satisfied, ball3 is unsatisfied. All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball1 (move robot1 room2 room3) (pick ball1 room3) (move robot1 room3 room1) (drop ball1 room1) Step 2: robot1 is now in room1. (at ball1 room1) with goal (at ball1 room1) (at ball2 room2) with goal (at ball2 room2) (at ball3 room4) with goal (at ball3 room3) ball1 is satisfied, ball2 is satisfied, ball3 is unsatisfied. All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball3 (move robot1 room1 room4) (pick ball3 room4) (move robot1 room4 room3) (drop ball3 room3) Step 3: robot1 is now in room3. (at ball1 room1) with goal (at ball1 room1) (at ball2 room2) with goal (at ball2 room2) (at ball3 room3) with goal (at ball3 room3) ball1 is satisfied, ball2 is satisfied, ball3 is satisfied. All balls are satisfied. | 2308.13724#64 | 2308.13724#66 | 2308.13724 | [
"2211.09935"
]
|
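The step-by-step reasoning in the Ball Moving answer above follows a simple greedy rule: first deliver any unsatisfied ball in the robot's current room, otherwise fetch the first unsatisfied ball. A compact Python rendering of that strategy (our reading of the examples, not the paper's code; the function name and tuple encoding are assumptions):

```python
# Greedy delivery strategy narrated in the Ball Moving answers (illustrative sketch).
def plan_balls(robot_room, balls, goals):
    """balls, goals: dicts mapping each ball to a room. Returns a move/pick/drop plan."""
    plan, balls = [], dict(balls)
    while balls != goals:
        unsatisfied = [b for b in sorted(balls) if balls[b] != goals[b]]
        # Prefer an unsatisfied ball already in the robot's room, else the first one.
        in_room = [b for b in unsatisfied if balls[b] == robot_room]
        target = in_room[0] if in_room else unsatisfied[0]
        if balls[target] != robot_room:          # walk to the ball first
            plan.append(("move", robot_room, balls[target]))
            robot_room = balls[target]
        plan.append(("pick", target, robot_room))
        plan.append(("move", robot_room, goals[target]))
        robot_room = goals[target]
        plan.append(("drop", target, robot_room))
        balls[target] = robot_room
    return plan
```

On Example 1 above (robot in room2; ball1 in room3, ball2 in room2, ball3 in room4, with goals room1/room2/room3), this sketch reproduces the eight actions listed in the answer.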
2308.13724#66 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | # Example 2 Question: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r - robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room))) ) (:action drop :parameters (?r - robot ?b - ball ?room - room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) | 2308.13724#65 | 2308.13724#67 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#67 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | ) ) Problem file: (define (problem fourballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 ball4 - ball) (:init (arm-empty) (robot-at robot1 room3) (at ball1 room1) (at ball2 room3) (at ball3 room1) (at ball4 room2) ) (:goal (and (at ball1 room3) (at ball2 room2) (at ball3 room4) (at ball4 room4)) ) Answer: | 2308.13724#66 | 2308.13724#68 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#68 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Action Sequence: Step 1: robot1 is now in room3. (at ball1 room1) with goal (at ball1 room3) (at ball2 room3) with goal (at ball2 room2) (at ball3 room1) with goal (at ball3 room4) (at ball4 room2) with goal (at ball4 room4) ball1 is unsatisfied, ball2 is unsatisfied, ball3 is unsatisfied, ball4 is unsatisfied. ball2 is in the same room with the robot1 and is unsatisfied. Go to complete ball2 (pick ball2 room3) (move robot1 room3 room2) (drop ball2 room2) Step 2: robot1 is now in room2. (at ball1 room1) with goal (at ball1 room3) (at ball2 room2) with goal (at ball2 room2) (at ball3 room1) with goal (at ball3 room4) (at ball4 room2) with goal (at ball4 room4) ball1 is unsatisfied, ball2 is satisfied, ball3 is unsatisfied, ball4 is unsatisfied. ball4 is in the same room with the robot1 and is unsatisfied. Go to complete ball4 (pick ball4 room2) (move robot1 room2 room4) (drop ball4 room4) Step 3: robot1 is now in room4. (at ball1 room1) with goal (at ball1 room3) (at ball2 room2) with goal (at ball2 room2) (at ball3 room1) with goal (at ball3 room4) (at ball4 room4) with goal (at ball4 room4) ball1 is unsatisfied, ball2 is satisfied, ball3 is unsatisfied, ball4 is satisfied. All balls that are in the same room with robot1 are satisfied. | 2308.13724#67 | 2308.13724#69 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#69 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 31 Go to complete next unsatisfied ball (from first to last): ball1 (move robot1 room4 room1) (pick ball1 room1) (move robot1 room1 room3) (drop ball1 room3) Step 4: robot1 is now in room3. (at ball1 room3) with goal (at ball1 room3) (at ball2 room2) with goal (at ball2 room2) (at ball3 room1) with goal (at ball3 room4) (at ball4 room4) with goal (at ball4 room4) ball1 is satisfied, ball2 is satisfied, ball3 is unsatisfied, ball4 is satisfied. All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball3 (move robot1 room3 room1) (pick ball3 room1) (move robot1 room1 room4) (drop ball3 room4) Step 5: robot1 is now in room4. (at ball1 room3) with goal (at ball1 room3) (at ball2 room2) with goal (at ball2 room2) (at ball3 room4) with goal (at ball3 room4) (at ball4 room4) with goal (at ball4 room4) ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is satisfied. All balls are satisfied. # Example 3 Question: | 2308.13724#68 | 2308.13724#70 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#70 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r -robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop :parameters (?r - robot ?b -ball ?room -room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) Problem file: (define (problem fiveballs) (:domain ballmoving) 32 (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 ball4 ball5 - ball) (:init (arm-empty) (robot-at robot1 room2) (at ball1 room1) (at ball2 room2) (at ball3 room4) (at ball4 room3) (at ball5 room4) ) (:goal (and (at ball1 room1) (at ball2 room1) (at ball3 room4) (at ball4 room2) (at ball5 room1)) ) Answer: | 2308.13724#69 | 2308.13724#71 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#71 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Action Sequence: Step 1: robot1 is now in room2. (at ball1 room1) with goal (at ball1 room1) (at ball2 room2) with goal (at ball2 room1) (at ball3 room4) with goal (at ball3 room4) (at ball4 room3) with goal (at ball4 room2) (at ball5 room4) with goal (at ball5 room1) ball1 is satisfied, ball2 is unsatisfied, ball3 is satisfied, ball4 is unsatisfied, ball5 is unsatisfied. ball2 is in the same room with the robot1 and is unsatisfied. Go to complete ball2 (pick ball2 room2) (move robot1 room2 room1) (drop ball2 room1) Step 2: robot1 is now in room1. (at ball1 room1) with goal (at ball1 room1) (at ball2 room1) with goal (at ball2 room1) (at ball3 room4) with goal (at ball3 room4) (at ball4 room3) with goal (at ball4 room2) (at ball5 room4) with goal (at ball5 room1) ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is unsatisfied, ball5 is unsatisfied. All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball4 (move robot1 room1 room3) (pick ball4 room3) (move robot1 room3 room2) (drop ball4 room2) Step 3: robot1 is now in room2. (at ball1 room1) with goal (at ball1 room1) (at ball2 room1) with goal (at ball2 room1) (at ball3 room4) with goal (at ball3 room4) (at ball4 room2) with goal (at ball4 room2) (at ball5 room4) with goal (at ball5 room1) | 2308.13724#70 | 2308.13724#72 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#72 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 33 ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is satisfied, ball5 is unsatisfied. All balls that are in the same room with robot1 are satisfied. Go to complete next unsatisfied ball (from first to last): ball5 (move robot1 room2 room4) (pick ball5 room4) (move robot1 room4 room1) (drop ball5 room1) Step 4: robot1 is now in room1. (at ball1 room1) with goal (at ball1 room1) (at ball2 room1) with goal (at ball2 room1) (at ball3 room4) with goal (at ball3 room4) (at ball4 room2) with goal (at ball4 room2) (at ball5 room1) with goal (at ball5 room1) ball1 is satisfied, ball2 is satisfied, ball3 is satisfied, ball4 is satisfied, ball5 is satisfied. All balls are satisfied. Blocksworld: For the Blocksworld planning domain, we use a total of five few-shot examples, which are given as follows. Example 1 Question: | 2308.13724#71 | 2308.13724#73 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#73 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: 34 (define (problem threeblocks) (:domain blocksworld) (:objects b1 b2 b3) (:init (arm-empty) (on-table b1) (on-table b2) (on b3 b1) (clear b2) (clear b3) ) (:goal (and (on b1 b2) (on b2 b3) (on-table b3)) ) Answer: | 2308.13724#72 | 2308.13724#74 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#74 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Action Sequence: We need to build the blocks from bottom to top. Third goal: b3 on table (unstack b3 b1) (putdown b3) Second goal: b2 on b3 (pickup b2) (stack b2 b3) First goal: b1 on b2 (pickup b1) (stack b1 b2) Question: Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) | 2308.13724#73 | 2308.13724#75 | 2308.13724 | [
"2211.09935"
]
|
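The Blocksworld plans can be replayed the same way against the four actions' preconditions and effects. A hedged sketch (the helper name `replay` and the `on`-map state encoding are our assumptions, not the paper's code), applied to the Example 1 plan:

```python
# Sketch: replay a Blocksworld plan. on[b] is the block b rests on, or
# "table"; a held block is removed from the map. Hypothetical helper.

def replay(plan, on):
    """Returns the final layout after executing the plan, asserting preconditions."""
    on = dict(on)          # keep the caller's initial state intact
    holding = None
    def clear(b):          # b is clear iff nothing rests on it and it is not held
        return holding != b and all(under != b for under in on.values())
    for name, *args in plan:
        if name == "pickup":
            (b,) = args
            assert holding is None and on[b] == "table" and clear(b)
            holding = b; del on[b]
        elif name == "putdown":
            (b,) = args
            assert holding == b
            on[b] = "table"; holding = None
        elif name == "unstack":
            b, under = args
            assert holding is None and on[b] == under and clear(b)
            holding = b; del on[b]
        elif name == "stack":
            b, under = args
            assert holding == b and clear(under)
            on[b] = under; holding = None
    return on

start = {"b1": "table", "b2": "table", "b3": "b1"}   # problem threeblocks
plan = [("unstack", "b3", "b1"), ("putdown", "b3"),
        ("pickup", "b2"), ("stack", "b2", "b3"),
        ("pickup", "b1"), ("stack", "b1", "b2")]
print(replay(plan, start) == {"b1": "b2", "b2": "b3", "b3": "table"})  # True
```

The final layout matches the goal (on b1 b2), (on b2 b3), (on-table b3), confirming the Example 1 answer.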
2308.13724#75 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 35 :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem fourblocks) (:domain blocksworld) (:objects b1 b2 b3 b4) (:init (arm-empty) (on-table b1) (on b2 b4) (on b3 b1) (on-table b4) (clear b2) (clear b3) ) (:goal (and (on b3 b2) (on b2 b1) (on b1 b4) (on-table b4)) ) Answer: Action Sequence: We need to build the blocks from bottom to top. Fourth goal: b4 on table Already satisfied in initial configuration Third goal: b1 on b4 (unstack b2 b4) (putdown b2) (unstack b3 b1) (putdown b3) (pickup b1) (stack b1 b4) Second goal: b2 on b1 (pickup b2) (stack b2 b1) First goal: b3 on b2 (pickup b3) (stack b3 b2) # Example 3 Question: Domain file: (define (domain blocksworld) (:predicates (clear ?x) | 2308.13724#74 | 2308.13724#76 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#76 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 36 (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem fiveblocks) (:domain blocksworld) (:objects b1 b2 b3 b4 b5) (:init (arm-empty) (on-table b1) (on-table b2) (on b3 b2) (on-table b4) (on b5 b4) (clear b1) (clear b3) (clear b5) ) (:goal (and (on b3 b1) (on b1 b4) (on b4 b2) (on b2 b5) (on-table b5)) ) Answer: Action Sequence: We need to build the blocks from bottom to top. Fifth goal: b5 on table | 2308.13724#75 | 2308.13724#77 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#77 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 37 (unstack b5 b4) (putdown b5) Fourth goal: b2 on b5 (unstack b3 b2) (putdown b3) (pickup b2) (stack b2 b5) Third goal: b4 on b2 (pickup b4) (stack b4 b2) Second goal: b1 on b4 (pickup b1) (stack b1 b4) First goal: b3 on b1 (pickup b3) (stack b3 b1) # Example 4 Question: Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem threeblocks) (:domain blocksworld) (:objects b1 b2 b3) | 2308.13724#76 | 2308.13724#78 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#78 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 38 (:init (arm-empty) (on b1 b3) (on-table b2) (on-table b3) (clear b1) (clear b2) ) (:goal (and (on b2 b1) (on b1 b3) (on-table b3)) ) Answer: Action Sequence: We need to build the blocks from bottom to top. Third goal: b3 on table Already satisfied in initial configuration Second goal: b1 on b3 Already satisfied in initial configuration First goal: b2 on b1 (pickup b2) (stack b2 b1) # Example 5 Question: Domain file: (define (domain blocksworld) (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x) ) (:action pickup :parameters (?ob) :precondition (and (clear ?ob) (on-table ?ob) (arm-empty)) :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty))) ) (:action putdown :parameters (?ob) :precondition (holding ?ob) :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob))) ) (:action stack :parameters (?ob ?underob) :precondition (and (clear ?underob) (holding ?ob)) :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob))) ) (:action unstack 39 :parameters (?ob ?underob) :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty)) :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty)))) ) Problem file: (define (problem threeblocks) (:domain blocksworld) (:objects b1 b2 b3) (:init (arm-empty) (on-table b1) (on b2 b3) (on-table b3) (clear b1) (clear b2) ) (:goal (and (on b2 b1) (on b1 b3) (on-table b3)) ) | 2308.13724#77 | 2308.13724#79 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#79 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Answer: Action Sequence: We need to build the blocks from bottom to top. Third goal: b3 on table Already satisfied in initial configuration Second goal: b1 on b3 (unstack b2 b3) (putdown b2) (pickup b1) (stack b1 b3) First goal: b2 on b1 (pickup b2) (stack b2 b1) # A.3 Self-Validator Cooking: For the Cooking planning domain, we use a total of four few-shot examples, which are given as follows. Example 1 Question: Initial state: (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) Goal state: (contain pot1 ingredient1) (contain pot1 ingredient4) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient5) | 2308.13724#78 | 2308.13724#80 | 2308.13724 | [
"2211.09935"
]
|
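All five Blocksworld answers apply the same heuristic: read the goal relations as one tower and satisfy them bottom-up, skipping goals already met. A hedged sketch of the ordering step (the helper name `bottom_up` is ours, and it assumes a single goal tower):

```python
# Sketch: derive the bottom-up placement order from the goal relations.
# Assumes one goal tower rooted at the table; hypothetical helper.

def bottom_up(goal_on):
    """goal_on[b] = block that b should rest on, or 'table'.
    Returns blocks in the order they must be placed (bottom first)."""
    above = {under: b for b, under in goal_on.items()}
    order, cur = [], above.get("table")
    while cur is not None:
        order.append(cur)
        cur = above.get(cur)
    return order

# Example 1's goal: b1 on b2, b2 on b3, b3 on table.
print(bottom_up({"b1": "b2", "b2": "b3", "b3": "table"}))  # ['b3', 'b2', 'b1']
```

This reproduces the "Third goal … Second goal … First goal" ordering the few-shot answers walk through.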
2308.13724#80 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 40 (contain pot2 ingredient6) (contain pot3 ingredient1) (contain pot3 ingredient4) (contain pot3 ingredient5) Examined action sequence: (pick ingredient1) (add ingredient1 pot1) (add ingredient1 pot3) (putdown ingredient1) (pick ingredient2) (add ingredient1 pot2) (putdown ingredient2) (pick ingredient3) (add ingredient3 pot2) (putdown ingredient3) (pick ingredient4) (add ingredient4 pot1) (add ingredient4 pot3) (putdown ingredient4) (pick ingredient5) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5) (pick ingredient6) (add ingredient6 pot2) (putdown ingredient6) Answer: Analysis: Initial state: pot1: empty pot2: empty pot3: empty (pick ingredient1) result: ingredient1 in hand (add ingredient1 pot1) result: pot1: ingredient1 pot2: empty pot3: empty (add ingredient1 pot3) result: pot1: ingredient1 pot2: empty pot3: ingredient1 (putdown ingredient1) result: hand empty ingredient has been picked: ingredient1 (pick ingredient2) result: ingredient2 in hand (add ingredient1 pot2) result: action is wrong since currently ingredient2 is in hand instead of ingredient1 Final answer: No, the action sequence is wrong, it cannot accomplish the goal. Example 2 Question: Initial state: (arm-empty) (pot-empty pot1) (pot-empty pot2) | 2308.13724#79 | 2308.13724#81 | 2308.13724 | [
"2211.09935"
]
|
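The Cooking validator examples track three things: what the hand holds, each pot's contents, and which ingredients were already picked (an ingredient may be picked only once). A hedged Python sketch of that bookkeeping (the function name `validate` and the error-string format are our assumptions), run on the start of Example 1's faulty sequence:

```python
# Sketch of the self-validator's bookkeeping for the Cooking domain;
# hypothetical helper, not the paper's implementation.

def validate(actions):
    hand, picked, pots = None, set(), {}
    for name, *args in actions:
        if name == "pick":
            (ing,) = args
            if hand is not None:
                return f"error: hand not empty before (pick {ing})"
            if ing in picked:
                return f"error: {ing} has already been picked"
            hand = ing
        elif name == "add":
            ing, pot = args
            if hand != ing:
                return f"error: holding {hand}, not {ing}"
            pots.setdefault(pot, set()).add(ing)
        elif name == "putdown":
            (ing,) = args
            if hand != ing:
                return f"error: not holding {ing}"
            picked.add(ing)
            hand = None
    return "ok"

# First actions of Example 1: (add ingredient1 pot2) is wrong because
# ingredient2 is in hand at that point.
seq = [("pick", "ingredient1"), ("add", "ingredient1", "pot1"),
       ("add", "ingredient1", "pot3"), ("putdown", "ingredient1"),
       ("pick", "ingredient2"), ("add", "ingredient1", "pot2")]
print(validate(seq))  # error: holding ingredient2, not ingredient1
```

The detected error matches the Example 1 analysis: the action is wrong since ingredient2, not ingredient1, is currently in hand.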
2308.13724#81 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 41 (pot-empty pot3) Goal state: (contain pot1 ingredient1) (contain pot1 ingredient2) (contain pot1 ingredient3) (contain pot1 ingredient6) (contain pot2 ingredient2) (contain pot2 ingredient4) (contain pot3 ingredient2) (contain pot3 ingredient3) (contain pot3 ingredient5) Examined action sequence: (pick ingredient1) (add ingredient1 pot1) (putdown ingredient1) (pick ingredient2) (add ingredient2 pot1) (add ingredient2 pot2) (add ingredient2 pot3) (putdown ingredient2) (pick ingredient2) (add ingredient2 pot1) (add ingredient2 pot3) (putdown ingredient2) (pick ingredient4) (add ingredient4 pot2) (putdown ingredient4) (pick ingredient6) (add ingredient6 pot1) (putdown ingredient6) Answer: Analysis: Initial state: pot1: empty pot2: empty pot3: empty (pick ingredient1) result: ingredient1 in hand (add ingredient1 pot1) result: pot1: ingredient1 pot2: empty pot3: empty (putdown ingredient1) result: hand empty ingredient has been picked: ingredient1 (pick ingredient2) result: ingredient2 in hand (add ingredient2 pot1) result: pot1: ingredient1, ingredient2 pot2: empty pot3: empty (add ingredient2 pot2) result: pot1: ingredient1, ingredient2 pot2: ingredient2 pot3: empty (add ingredient2 pot3) result: pot1: ingredient1, ingredient2 pot2: ingredient2 pot3: ingredient2 | 2308.13724#80 | 2308.13724#82 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#82 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 42 (putdown ingredient2) result: hand empty ingredient has been picked: ingredient1, ingredient2 (pick ingredient2) result: action is wrong since ingredient2 has already been picked Final answer: No, the action sequence is wrong, it cannot accomplish the goal. # Example 3 Question: Initial state: (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) Goal state: (contain pot1 ingredient2) (contain pot1 ingredient5) (contain pot2 ingredient2) (contain pot2 ingredient4) (contain pot2 ingredient5) (contain pot2 ingredient6) (contain pot3 ingredient1) (contain pot3 ingredient5) Examined action sequence: (pick ingredient2) (add ingredient2 pot1) (add ingredient2 pot2) (putdown ingredient2) (pick ingredient4) (add ingredient4 pot2) (putdown ingredient4) (pick ingredient5) (add ingredient5 pot1) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5) (pick ingredient6) (add ingredient6 pot2) (putdown ingredient6) Answer: Analysis: Initial state: pot1: empty pot2: empty pot3: empty (pick ingredient2) result: ingredient2 in hand (add ingredient2 pot1) result: pot1: ingredient2 pot2: empty pot3: empty (add ingredient2 pot2) result: pot1: ingredient2 pot2: ingredient2 pot3: empty (putdown ingredient2) result: hand empty | 2308.13724#81 | 2308.13724#83 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#83 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 43 ingredient has been picked: ingredient2 (pick ingredient4) result: ingredient4 in hand (add ingredient4 pot2) result: pot1: ingredient2 pot2: ingredient2, ingredient4 pot3: empty (putdown ingredient4) result: hand empty ingredient has been picked: ingredient2, ingredient4 (pick ingredient5) result: ingredient5 in hand (add ingredient5 pot1) result: pot1: ingredient2, ingredient5 pot2: ingredient2, ingredient4 pot3: empty (add ingredient5 pot2) result: pot1: ingredient2, ingredient5 pot2: ingredient2, ingredient4, ingredient5 pot3: empty (add ingredient5 pot3) result: pot1: ingredient2, ingredient5 pot2: ingredient2, ingredient4, ingredient5 pot3: ingredient5 (putdown ingredient5) result: hand empty ingredient has been picked: ingredient2, ingredient4, ingredient5 (pick ingredient6) result: ingredient6 in hand (add ingredient6 pot2) result: pot1: ingredient2, ingredient5 pot2: ingredient2, ingredient4, ingredient5, ingredient6 pot3: ingredient5 (putdown ingredient6) result: hand empty ingredient has been picked: ingredient2, ingredient4, ingredient5, ingredient6 No error found in action sequence. Goal state check: final state: pot1: ingredient2, ingredient5 pot2: ingredient2, ingredient4, ingredient5, ingredient6 pot3: ingredient5 goal state: (contain pot1 ingredient2) (contain pot1 ingredient5) (contain pot2 ingredient2) (contain pot2 ingredient4) (contain pot2 ingredient5) (contain pot2 ingredient6) (contain pot3 ingredient1) (contain pot3 ingredient5) The final state is not the same as the goal state. pot3 should contain ingredient1. Goal is not accomplished. Final answer: No, the action sequence is wrong, it cannot accomplish the goal. pot3 should contain ingredient1. Example 4 Question: Initial state: (arm-empty) (pot-empty pot1) 44 | 2308.13724#82 | 2308.13724#84 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#84 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (pot-empty pot2) (pot-empty pot3) Goal state: (contain pot1 ingredient2) (contain pot1 ingredient3) (contain pot1 ingredient4) (contain pot1 ingredient6) (contain pot2 ingredient3) (contain pot2 ingredient5) (contain pot3 ingredient4) (contain pot3 ingredient5) Examined action sequence: (pick ingredient2) (add ingredient2 pot1) (putdown ingredient2) (pick ingredient3) (add ingredient3 pot1) (add ingredient3 pot2) (putdown ingredient3) (pick ingredient4) (add ingredient4 pot1) (add ingredient4 pot3) (putdown ingredient4) (pick ingredient5) (add ingredient5 pot2) (add ingredient5 pot3) (putdown ingredient5) (pick ingredient6) (add ingredient6 pot1) (putdown ingredient6) Answer: Analysis: Initial state: pot1: empty pot2: empty pot3: empty (pick ingredient2) result: ingredient2 in hand (add ingredient2 pot1) result: pot1: ingredient2 pot2: empty pot3: empty (putdown ingredient2) result: hand empty ingredient has been picked: ingredient2 (pick ingredient3) result: ingredient3 in hand (add ingredient3 pot1) result: pot1: ingredient2, ingredient3 pot2: empty pot3: empty (add ingredient3 pot2) result: pot1: ingredient2, ingredient3 pot2: ingredient3 pot3: empty (putdown ingredient3) result: hand empty ingredient has been picked: ingredient2, ingredient3 (pick ingredient4) result: ingredient4 in hand (add ingredient4 pot1) result: | 2308.13724#83 | 2308.13724#85 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#85 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 45 pot1: ingredient2, ingredient3, ingredient4 pot2: ingredient3 pot3: empty (add ingredient4 pot3) result: pot1: ingredient2, ingredient3, ingredient4 pot2: ingredient3 pot3: ingredient4 (putdown ingredient4) result: hand empty ingredient has been picked: ingredient2, ingredient3, ingredient4 (pick ingredient5) result: ingredient5 in hand (add ingredient5 pot2) result: pot1: ingredient2, ingredient3, ingredient4 pot2: ingredient3, ingredient5 pot3: ingredient4 (add ingredient5 pot3) result: pot1: ingredient2, ingredient3, ingredient4 pot2: ingredient3, ingredient5 pot3: ingredient4, ingredient5 (putdown ingredient5) result: hand empty ingredient has been picked: ingredient2, ingredient3, ingredient4, ingredient5 (pick ingredient6) result: ingredient6 in hand (add ingredient6 pot1) result: pot1: ingredient2, ingredient3, ingredient4, ingredient6 pot2: ingredient3, ingredient5 pot3: ingredient4, ingredient5 (putdown ingredient6) result: hand empty ingredient has been picked: ingredient2, ingredient3, ingredient4, ingredient5, ingredient6 No error found in action sequence. Goal state check: final state: pot1: ingredient2, ingredient3, ingredient4, ingredient6 pot2: ingredient3, ingredient5 pot3: ingredient4, ingredient5 goal state: (contain pot1 ingredient2) (contain pot1 ingredient3) (contain pot1 ingredient4) (contain pot1 ingredient6) (contain pot2 ingredient3) (contain pot2 ingredient5) (contain pot3 ingredient4) (contain pot3 ingredient5) The final state is the same as the goal state. Goal is accomplished. Final answer: Yes, the action sequence is correct, it can accomplish the task. Ball Moving: For the Ball Moving planning domain, we use a total of five few-shot examples, which are given as follows. 
Example 1 Question: Robot and ball initial state: (robot-at robot1 room1) (at ball1 room4) (at ball2 room3) (at ball3 room4) | 2308.13724#84 | 2308.13724#86 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#86 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Goal state: (at ball1 room4) (at ball2 room4) (at ball3 room3) Examined action sequence: (move robot1 room1 room3) (pick ball2 room3) (move robot1 room3 room4) (drop ball2 room3) (pick ball1 room4) (move robot1 room4 room3) (drop ball1 room3) (pick ball3 room4) (move robot1 room3 room4) (drop ball3 room3) Answer: Analysis: Initial state: (robot-at robot1 room1), (at ball1 room4), (at ball2 room3), (at ball3 room4) (move robot1 room1 room3) precondition: (robot-at robot1 room1) current state: (robot-at robot1 room1) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 room3), (at ball3 room4) (pick ball2 room3) precondition 1: (at ball2 room3) current state: (at ball2 room3) current state is the same as the precondition 1 precondition 2: robot1 and ball2 in the same room current state: (robot-at robot1 room3), (at ball2 room3) robot1 is in the same room as ball2 two preconditions are correct, action is correct change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 hand), (at ball3 room4) (move robot1 room3 room4) precondition: (robot-at robot1 room3) current state: (robot-at robot1 room3) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room4), (at ball1 room4), (at ball2 hand), (at ball3 room4) (drop ball2 room3) precondition: (robot-at robot1 room3) current state: (robot-at robot1 room4) the current state is not the same as the precondition, action is wrong analysis stops due to error Final answer: No, the action sequence is wrong, it cannot accomplish the goal. Example 2 Question: | 2308.13724#85 | 2308.13724#87 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#87 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Robot and ball initial state: (robot-at robot1 room1) (at ball1 room3) (at ball2 room4) (at ball3 room2) Goal state: (at ball1 room2) (at ball2 room3) (at ball3 room1) Examined action sequence: (pick ball2 room4) (move robot1 room2 room3) (drop ball2 room3) (move robot1 room3 room2) (pick ball1 room3) (move robot1 room2 room1) (drop ball1 room2) (move robot1 room1 room2) (pick ball3 room2) (move robot1 room2 room1) (drop ball3 room1) Answer: Analysis: Initial state: (robot-at robot1 room1), (at ball1 room3), (at ball2 room4), (at ball3 room2) (pick ball2 room4) precondition 1: (at ball2 room4) current state: (at ball2 room4) current state is the same as the precondition 1 precondition 2: robot1 and ball2 in the same room current state: (robot-at robot1 room1), (at ball2 room4) robot1 is not in the same room as ball2 preconditions are not fulfilled, action is wrong analysis stops due to error Final answer: No, the action sequence is wrong, it cannot accomplish the goal. Example 3 Question: Robot and ball initial state: (robot-at robot1 room2) (at ball1 room2) (at ball2 room1) (at ball3 room3) Goal state: (at ball1 room1) (at ball2 room1) (at ball3 room3) Examined action sequence: (move robot1 room2 room1) (pick ball1 room1) (move robot1 room1 room2) (drop ball1 room2) (pick ball2 room1) (move robot1 room2 room1) (drop ball2 room1) Answer: | 2308.13724#86 | 2308.13724#88 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#88 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Analysis: Initial state: (robot-at robot1 room2), (at ball1 room2), (at ball2 room1), (at ball3 room3) (move robot1 room2 room1) precondition: (robot-at robot1 room2) current state: (robot-at robot1 room2) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room1), (at ball1 room2), (at ball2 room1), (at ball3 room3) (pick ball1 room1) precondition 1: (at ball1 room1) current state: (at ball1 room2) current state is not the same as the precondition 1 preconditions are not fulfilled, action is wrong analysis stops due to error Final answer: No, the action sequence is wrong, it cannot accomplish the goal. # Example 4 Question: Robot and ball initial state: (robot-at robot1 room4) (at ball1 room2) (at ball2 room4) (at ball3 room3) Goal state: (at ball1 room3) (at ball2 room4) (at ball3 room4) Examined action sequence: (move robot1 room4 room2) (pick ball1 room2) (move robot1 room2 room3) (drop ball1 room3) Answer: Analysis: | 2308.13724#87 | 2308.13724#89 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#89 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Initial state: (robot-at robot1 room4), (at ball1 room2), (at ball2 room4), (at ball3 room3) (move robot1 room4 room2) precondition: (robot-at robot1 room4) current state: (robot-at robot1 room4) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room2), (at ball1 room2), (at ball2 room4), (at ball3 room3) (pick ball1 room2) precondition 1: (at ball1 room2) current state: (at ball1 room2) current state is the same as the precondition 1 precondition 2: robot1 and ball1 in the same room current state: (robot-at robot1 room2), (at ball1 room2) robot1 is in the same room as ball1 two preconditions are correct, action is correct change state to: (robot-at robot1 room2), (at ball1 hand), (at ball2 room4), (at ball3 room3) (move robot1 room2 room3) | 2308.13724#88 | 2308.13724#90 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#90 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | precondition: (robot-at robot1 room2) current state: (robot-at robot1 room2) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room3), (at ball1 hand), (at ball2 room4), (at ball3 room3) (drop ball1 room3) precondition: (robot-at robot1 room3) current state: (robot-at robot1 room3) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room3), (at ball1 room3), (at ball2 room4), (at ball3 room3) No error found in action sequence. Goal state check: final state: (at ball1 room3), (at ball2 room4), (at ball3 room3) goal state: (at ball1 room3), (at ball2 room4), (at ball3 room4) The final state is not the same as the goal state. The goal is not accomplished. Final answer: No, the action sequence is wrong, it cannot accomplish the goal. # Example 5 Question: Robot and ball initial state: (robot-at robot1 room1) (at ball1 room4) (at ball2 room1) (at ball3 room1) Goal state: (at ball1 room2) (at ball2 room1) (at ball3 room3) Examined action sequence: (pick ball3 room1) (move robot1 room1 room3) (drop ball3 room3) (move robot1 room3 room4) (pick ball1 room4) (move robot1 room4 room2) (drop ball1 room2) Answer: Analysis: Initial state: (robot-at robot1 room1), (at ball1 room4), (at ball2 room1), (at ball3 room1) (pick ball3 room1) precondition 1: (at ball3 room1) current state: (at ball3 room1) current state is the same as the precondition 1 precondition 2: robot1 and ball3 in the same room current state: (robot-at robot1 room1), (at ball3 room1) robot1 is in the same room as ball3 two preconditions are correct, action is correct change state to: (robot-at robot1 room1), (at ball1 room4), (at ball2 room1), (at ball3 hand) (move robot1 room1 room3) precondition: (robot-at robot1 room1) current state: (robot-at robot1 room1) | 2308.13724#89 | 2308.13724#91 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#91 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | current state is the same as the precondition, action is correct change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 room1), (at ball3 hand) (drop ball3 room3) precondition: (robot-at robot1 room3) current state: (robot-at robot1 room3) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room3), (at ball1 room4), (at ball2 room1), (at ball3 room3) (move robot1 room3 room4) precondition: (robot-at robot1 room3) current state: (robot-at robot1 room3) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room4), (at ball1 room4), (at ball2 room1), (at ball3 room3) (pick ball1 room4) precondition 1: (at ball1 room4) current state: (at ball1 room4) current state is the same as the precondition 1 precondition 2: robot1 and ball1 in the same room current state: (robot-at robot1 room4), (at ball1 room4) robot1 is in the same room as ball1 two preconditions are correct, action is correct change state to: (robot-at robot1 room4), (at ball1 hand), (at ball2 room1), (at ball3 room3) (move robot1 room4 room2) precondition: (robot-at robot1 room4) current state: (robot-at robot1 room4) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room2), (at ball1 hand), (at ball2 room1), (at ball3 room3) (drop ball1 room2) precondition: (robot-at robot1 room2) current state: (robot-at robot1 room2) current state is the same as the precondition, action is correct change state to: (robot-at robot1 room2), (at ball1 room2), (at ball2 room1), (at ball3 room3) No error found in action sequence. Goal state check: final state: (at ball1 room2), (at ball2 room1), (at ball3 room3) goal state: (at ball1 room2), (at ball2 room1), (at ball3 room3) The final state is the same as the goal state. The goal is accomplished. Final answer: Yes, the action sequence is correct, it can accomplish the task. | 2308.13724#90 | 2308.13724#92 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#92 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Blocksworld: For the Blocksworld planning domain, we use a total of six few-shot examples, which are given as follows. Example 1 Question: Block initial state: (on-table b1) (on b2 b1) (on b3 b2) Goal state: (on b1 b2) (on b2 b3) (on-table b3) Examined action sequence: (unstack b3 b2) (putdown b3) (unstack b2 b1) (putdown b2) (unstack b1 b2) (putdown b1) Answer: Analysis: Initial: b1 on the table, b2 on b1, b3 on b2 (unstack b3 b2) result: b1 on the table, b2 on b1, b3 in the hand (putdown b3) result: b1 on the table, b2 on b1, b3 on the table (unstack b2 b1) result: b1 on the table, b2 in hand, b3 on the table (putdown b2) result: b1 on the table, b2 on the table, b3 on the table (unstack b1 b2) result: the action is wrong since b1 is not on top of b2 analysis stops due to error Final answer: No, the action sequence is wrong, it cannot accomplish the goal. # Example 2 Question: Block initial state: (on b1 b3) (on-table b2) (on-table b3) Goal state: (on b3 b2) (on b2 b1) (on-table b1) Examined action sequence: (unstack b1 b3) (putdown b1) (pickup b2) (stack b2 b1) (pickup b3) (stack b3 b2) Answer: Analysis: | 2308.13724#91 | 2308.13724#93 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#93 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | Initial: b1 on b3, b2 on the table, b3 on the table (unstack b1 b3) result: b1 in hand, b2 on the table, b3 on the table (putdown b1) result: b1 on the table, b2 on the table, b3 on the table (pickup b2) result: b1 on the table, b2 in hand, b3 on the table (stack b2 b1) result: b1 on the table, b2 on b1, b3 on the table (pickup b3) result: b1 on the table, b2 on b1, b3 in hand (stack b3 b2) result: b1 on the table, b2 on b1, b3 on b2 No error found in action sequence. Goal state check: the final state is the same as the goal state Final answer: Yes, the action sequence is correct, it can accomplish the task. | 2308.13724#92 | 2308.13724#94 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#94 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | # Example 3 Question: Block initial state: (on b1 b3) (on-table b2) (on b3 b2) Goal state: (on b2 b3) (on b3 b1) (on-table b1) Examined action sequence: (unstack b1 b3) (putdown b1) (unstack b3 b2) (putdown b3) (pickup b2) (stack b2 b3) Answer: Analysis: Initial: b1 on b3, b2 on the table, b3 on b2 (unstack b1 b3) result: b1 in hand, b2 on the table, b3 on b2 (putdown b1) result: b1 on the table, b2 on the table, b3 on b2 (unstack b3 b2) result: b1 on the table, b2 on the table, b3 in hand (putdown b3) result: b1 on the table, b2 on the table, b3 on the table (pickup b2) result: b1 on the table, b2 in hand, b3 on the table (stack b2 b3) result: b1 on the table, b2 on b3, b3 on the table No error found in action sequence. Goal state check: the final state is not the same as the goal state Final answer: No, the action sequence is wrong, it cannot accomplish the goal. Example 4 Question: Block initial state: (on b1 b3) (on-table b2) (on-table b3) Goal state: (on b1 b3) (on b3 b2) (on-table b2) Examined action sequence: (pickup b2) (stack b2 b1) (pickup b3) (stack b3 b2) (pickup b1) (stack b1 b3) Answer: Analysis: Initial: b1 on b3, b2 on the table, b3 on the table (pickup b2) result: b1 on b3, b2 in hand, b3 on the table (stack b2 b1) result: b1 on b3, b2 on b1, b3 on the table | 2308.13724#93 | 2308.13724#95 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#95 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (pickup b3) result: the action is wrong since b3 is not clear to move analysis stops due to error Final answer: No, the action sequence is wrong, it cannot accomplish the goal. # Example 5 Question: Block initial state: (on-table b1) (on-table b2) (on-table b3) Goal state: (on b2 b3) (on b3 b1) (on-table b1) Examined action sequence: (pickup b3) (stack b3 b1) (pickup b2) (stack b2 b3) Answer: Analysis: Initial: b1 on the table, b2 on the table, b3 on the table (pickup b3) result: b1 on the table, b2 on the table, b3 in hand (stack b3 b1) result: b1 on the table, b2 on the table, b3 on b1 (pickup b2) result: b1 on the table, b2 in hand, b3 on b1 (stack b2 b3) result: b1 on the table, b2 on b3, b3 on b1 No error found in action sequence. Goal state check: the final state is the same as the goal state Final answer: Yes, the action sequence is correct, it can accomplish the task. Example 6 Question: Block initial state: (on b1 b2) (on b2 b3) (on-table b3) Goal state: (on b1 b3) (on b3 b2) (on-table b2) Examined action sequence: (unstack b1 b2) (putdown b1) (pickup b1) (stack b1 b3) (unstack b3 b2) (putdown b3) (pickup b3) (stack b3 b2) (unstack b2 b3) (putdown b2) | 2308.13724#94 | 2308.13724#96 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#96 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | (pickup b2) (stack b2 b3) Answer: Analysis: Initial: b1 on b2, b2 on b3, b3 on the table (unstack b1 b2) result: b1 in hand, b2 on b3, b3 on the table (putdown b1) result: b1 on the table, b2 on b3, b3 on the table (pickup b1) result: b1 in hand, b2 on b3, b3 on the table (stack b1 b3) result: the action is wrong since b3 is not clear to move analysis stops due to error Final answer: No, the action sequence is wrong, it cannot accomplish the goal. | 2308.13724#95 | 2308.13724#97 | 2308.13724 | [
"2211.09935"
]
|
2308.13724#97 | ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning | 55 | 2308.13724#96 | 2308.13724 | [
"2211.09935"
]
|
|
2308.13149#0 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | arXiv:2308.13149v1 [cs.CL] 25 Aug 2023 # SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research Liangtai Sun, Yang Han, Zihan Zhao, Da Ma, Zhennan Shen, Baocai Chen, Lu Chen* and Kai Yu*. X-LANCE Lab, Department of Computer Science and Engineering, AI Institute, Shanghai Jiao Tong University, Shanghai, China {slt19990817, csyanghan, zhao mengxin, mada123}@sjtu.edu.cn {ieee-szn, 15368493547, chenlusz, kai.yu}@sjtu.edu.cn # Abstract Recently, there has been growing interest in using Large Language Models (LLMs) for scientific research. Numerous benchmarks have been proposed to evaluate the ability of LLMs for scientific research. However, current benchmarks are mostly based on pre-collected objective questions. This design suffers from the data leakage problem and lacks the evaluation of subjective Q/A ability. In this paper, we propose SciEval, a comprehensive and multi-disciplinary evaluation benchmark to address these issues. Based on Bloom's taxonomy, SciEval covers four dimensions to systematically evaluate scientific research ability. In particular, we design a "dynamic" subset based on scientific principles to prevent evaluation from potential data leakage. Both objective and subjective questions are included in SciEval. These characteristics make SciEval a more effective benchmark for scientific research ability evaluation of LLMs. Comprehensive experiments on most advanced LLMs show that, although GPT-4 achieves SOTA performance compared to other LLMs, there is still substantial room for improvement, especially for dynamic questions. | 2308.13149#1 | 2308.13149 | [
"2307.03109"
]
|
|
2308.13149#1 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | The data and codes are now publicly available.1 Introduction Large Language Models (LLMs), such as ChatGPT (Schulman et al. 2022), have attracted widespread attention in general scenarios, including information search, code generation, and more. In the field of science, LLMs have also shown preliminary potential in improving scientific research efficiency and transforming scientific research paradigms (Blanco-Gonzalez et al. 2023; WANG and MIAO 2023). In the meanwhile, several scientific LLMs have been proposed by researchers (Taylor et al. 2022; Luo et al. 2022; Frey et al. 2022). In the general field, there are already numerous evaluation benchmarks to evaluate the language understanding, language generation and reasoning capabilities of LLMs, such as MMLU (Hendrycks et al. 2020), AGIEval (Zhong et al. 2023), and C-EVAL (Huang et al. 2023), shown in Table 1. Although these benchmarks cover data of the science domain, the data sources are usually confined to educational materials, which cannot adequately assess the research ability of LLMs and do not align with real-life scientific research scenarios. In addition, some benchmarks have been proposed to evaluate the scientific capability of LLMs, such as MultiMedQA (Singhal et al. 2023), ChemLLMBench (Guo et al. 2023), and MATH (Hendrycks et al. 2021), while these benchmarks are restricted to a specific scientific discipline, leaving a lack of a more general scientific evaluation benchmark.2 In addition, these benchmarks (1) lack evaluation systems for scientific capabilities, (2) are all based on objective questions, which are insufficient to assess scientific abilities, and (3) face the risk of data leakage. In response to this gap, we present SciEval, an English benchmark designed to evaluate advanced abilities of LLMs in the scientific domain. SciEval consists of a total of about 18000 challenging scientific questions, spanning three important basic science fields: chemistry, physics and biology, each of which is further divided into multiple sub-topics. SciEval mainly has the following three characteristics: | 2308.13149#0 | 2308.13149#2 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#2 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | • Multi-level and comprehensive evaluation of the ability of LLMs in the scientific field. Scientific ability of LLMs needs to be evaluated from multiple aspects. Leveraging cognitive domains of Bloom's taxonomy (Krathwohl 2002; Forehand 2010), which covers six levels, SciEval evaluates the scientific capabilities of large language models across four dimensions: basic knowledge, knowledge application, scientific calculation, and research ability, where each capability aligns with one or more cognitive levels. | 2308.13149#1 | 2308.13149#3 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#3 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | • Combination of objective and subjective questions. SciEval is mainly based on objective questions, which allow for quick and standard model evaluations, involving multiple-choice, fill-in-the-blank, and judgment questions. These questions can help us understand whether the model can correctly understand and memorize scientific knowledge. However, objective questions are insufficient to assess scientific capability holistically. To better assess scientific reasoning and application ability, SciEval introduces a small number of subjective questions, involving a total of twelve basic science experiments, which is named Experimental Data. | 2308.13149#2 | 2308.13149#4 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#4 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | • Dynamic data generation based on basic scientific *The corresponding authors are Lu Chen and Kai Yu. 1https://github.com/OpenDFM/BAI-SciEval 2Due to the page limitation, we only compare some widely used benchmarks. For more information, we refer to (Chang et al. 2023). Name; Category; Ability; Source; Data Type; Dynamic; #Data MMLU; humanities, social science, STEM, other; BK, KA, SC; exam, book, course; objective; ✗; 14079 AGIEval; social science, STEM; BK, KA, SC; exam; objective; ✗; 8062 C-EVAL; humanities, social science, STEM, other; BK, KA, SC; exam; objective; ✗; 12342 MultiMedQA; medical; BK, KA, RA; exam, research; objective; ✗; 13115 ChemLLMBench; chemistry; BK, KA; knowledge base; objective; ✗; 800 MATH; mathematics; SC; exam; objective; ✗; 5000 SciEval; science; BK, KA, SC, RA; community QA, knowledge base; objective + subjective; ✓; 15901 Table 1: | 2308.13149#3 | 2308.13149#5 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#5 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Dataset comparison of SciEval and some other datasets covering the science domain. "BK" stands for Basic Knowledge, "KA" stands for Knowledge Application, "SC" stands for Scientific Calculation, and "RA" stands for Research Ability. principles. The huge amount of training data used for pre-training LLMs may cause the risk of data leakage for evaluation. In order to solve this problem, one of the main features of SciEval is the use of Dynamic Data, which can prevent potential data leakage and ensure the fairness and credibility of the evaluation results. The Dynamic Data will be updated regularly, and we will maintain a stable version to make a fair comparison of model performance. And the objective questions other than Dynamic Data are referred to as Static Data. | 2308.13149#4 | 2308.13149#6 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#6 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | We conduct experiments to evaluate LLMs on SciEval in answer-only, chain-of-thought and few-shot settings. Results indicate that GPT-4 is the strongest model, with only GPT-4, GPT-3.5-turbo and Claude-v1.3 surpassing 60% average accuracy on the Static Data, signifying considerable opportunities for improvement. With the results of Dynamic Data, we find that these LLMs have little knowledge about molecules, and most models could only retain near-random accuracy in the physics subset. As for Experimental Data, some top-tier models could perform satisfactorily in experimental principle and designing, while almost all models struggle to analyze the experimental results. With the analysis of experiment results, we claim that training on a large-scale scientific corpus is helpful for the scientific capability of LLMs, and most LLMs perform badly on calculation problems, especially in the physics domain. We hope that SciEval can provide an excellent benchmark for the assessment of scientific capability of LLMs, and promote the wide application in science. Big-Bench (Srivastava et al. 2022) introduces 204 challenging tasks covering various domains, aiming to evaluate tasks beyond the capabilities of existing language models. AGIEval (Zhong et al. 2023) serves as an evaluation framework for assessing the performance of foundation models in human-centric standardized exams. C-Eval (Huang et al. 2023) assesses the advanced knowledge and reasoning capabilities of foundation models in Chinese. Specific Benchmarks for LLMs Apart from general tasks, specific benchmarks are designed for certain downstream tasks. MultiMedQA (Singhal et al. 2023) focuses on medical question-answering, evaluating LLMs in terms of clinical knowledge and QA abilities. MATH (Hendrycks et al. 2021) assesses reasoning and problem-solving proficiencies of LLMs in mathematics. ScienceQA (Lu et al. 2022) proposes a multi-modal benchmark with a diverse set of science topics and annotations of their answers with corresponding lectures and explanations, collected from elementary and high school science curricula. | 2308.13149#5 | 2308.13149#7 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#7 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | SCIBENCH (Wang et al. 2023) examines the reasoning capabilities required for complex scientific problem-solving and proposes two datasets of college-level scientific problems. Compared to these benchmarks, SciEval (1) evaluates scientific capabilities from multiple aspects, having a broader coverage, (2) uses data of community Q&A, which is more flexible and diverse, (3) designs a subset of dynamic data, making an effort to mitigate data leakage. # Related Work General Benchmarks for LLMs To evaluate the performance of LLMs across different tasks, several benchmarks have been proposed. MMLU (Hendrycks et al. 2020) aims to develop a comprehensive test for evaluating text models in multi-task contexts. HELM (Liang et al. 2022) offers a comprehensive assessment, evaluating LLMs across various aspects, such as language understanding and common-sense reasoning. The SciEval dataset In this section, we first introduce the evaluation system of SciEval (§), followed by the data collection process (§). And finally, we show the data statistics (§). Scientific Research Evaluation System Scientific research requires different dimensions of knowledge, such as understanding and calculation, hence evaluation of scientific ability should be conducted at multiple levels. Bloom's taxonomy (Krathwohl 2002; Forehand 2010) | 2308.13149#6 | 2308.13149#8 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#8 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Figure 1: The illustration of the evaluation system. SciEval covers three disciplines with amounts of sub-topics, and investigates four abilities, corresponding to six cognitive levels. is a set of three hierarchical methods used for classification of educational learning objectives covering cognitive, affective and psychomotor domains. The cognitive domain is frequently used to structure curriculum learning objectives, assessments and activities, and is broken into six levels: Remember, Understand, Apply, Analyze, Evaluate and Create, as is shown in Figure 1, which are suitable for the evaluation of scientific capability. Based on the cognitive domain of Bloom's taxonomy, the evaluation system of SciEval consists of four knowledge dimensions: Basic Knowledge, Knowledge Application, Scientific Calculation, and Research Ability. As is shown in Figure 1, Basic Knowledge primarily assesses the fundamental scientific knowledge of LLMs. Knowledge Application focuses on how to apply basic knowledge to solve scientific problems, requiring models to have comprehension, application, and analysis abilities. Scientific Calculation is a specialized application of knowledge that further examines complex reasoning capabilities of LLMs based on their general knowledge application abilities. Research Ability assesses evaluation capabilities at a higher cognitive level, requiring models to participate in various aspects of scientific research, including problem formulation, experimental design, data analysis, and summarization.
Based on the evaluation system, we design three different types of data: Static Data, Dynamic Data, and Experimental Data. The Static Data covers all these four knowledge dimensions and will remain constant throughout, while the Dynamic Data examines from the aspects of Knowledge Application and Scientific Calculation and will be regularly updated to prevent any data leakage. The Experimental Data comprises a set of questions for twelve scientific experiments and can be used to evaluate the Research Ability. Q&A3, a community-driven website that covers a wide range of subjects such as science and literature. | 2308.13149#7 | 2308.13149#9 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#9 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Specifically, we collect data from the fields of biology, chemistry, and physics. To ensure quality, we employ rule-based methods to preprocess the crawled data. While gathering the questions, we found that not all of them are suitable as titles. To address this, we utilize GPT-4 with the "Task 1" prompt, as depicted in Figure 2, to process these questions. Since most of the collected questions are open-ended and challenging to evaluate, we employ GPT-4 to simplify ground-truth answers and generate three wrong answers to formulate them as multiple-choice questions. Additionally, we classify the questions into their respective knowledge domains. And during this process, we manually check the generated content of GPT-4 to ensure data quality. To make the dataset more diverse and comprehensive, we further integrate data from some publicly available datasets: MedQA (Jin et al. 2021) is a free-form multiple-choice OpenQA dataset for solving medical problems, collected from professional medical board exams. We use the test set of USMLE, which is the English subset of MedQA. • PubMedQA (Jin et al. 2019) is a biomedical question-answering dataset collected from PubMed abstracts. The task of PubMedQA is to answer research questions with yes/no/maybe using the corresponding abstracts, which is fit for evaluating the literature comprehension ability. We incorporate 1000 expert-annotated data from it and frame them as judgment questions. | 2308.13149#8 | 2308.13149#10 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#10 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | • Reagent Selection (Guo et al. 2023) involves the identification and proposal of the most fitting reagents for a specific chemical reaction or process, which is a subset of ChemLLMBench. We randomly select 40% data and formulate them as multiple-choice questions. # Data Collection Dynamic Data The current training of LLMs often uses a large amount of data, resulting in a risk of data leakage for evaluation. In order to solve this problem, we design a Static Data The collection steps of Static Data are shown in Figure 2. The primary source of Static Data is Socratic 3https://socratic.org | 2308.13149#9 | 2308.13149#11 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#11 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | "dynamic" subset, which can generate data dynamically according to scientific principles. The dynamic subset covers two disciplines, chemistry and physics. For chemistry data, we use the basic information and properties of molecules crawled from PubChem4 to create data. For physics data, we manually write some Python scripts according to the physics formulas. When obtaining the evaluation dataset, we will provide a regenerated version to users and we will update it regularly, while at the same time, we will maintain a stable version of the dynamic data to make a fair comparison. these questions are in English and we show some data examples in Appendix D. For Static Data, we further split the data into dev, valid, and test set. For each data source, each knowledge domain, and each discipline, we randomly select 5 data to form the dev set, which can be used for few-shot learning, and we split the remaining data with a ratio of 1:9 to construct the valid set and test set respectively. # Experiment | 2308.13149#10 | 2308.13149#12 | 2308.13149 | [
"2307.03109"
]
|
2308.13149#12 | SciEval: A Multi-Level Large Language Model Evaluation Benchmark for Scientific Research | Experimental Data To better evaluate the scientific thoughts and abilities of LLMs, SciEval introduces a subset of experimental data, involving 12 different basic scientific experiments. These experiments are collected from basic science experiment courses at university, and each experiment conducts a comprehensive investigation of the ability of LLMs in scientific research and experimentation from the perspectives of experimental principle, process, and analysis and summarization of experimental results. Given a question and four options, please select the right answer. Your answer should be "A", "B", "C" or "D". Please directly give the answer without any explanation. How many atoms are in 3.5 moles of arsenic atoms? A. 1.5 x 10^24 atoms B. 3.0 x 10^24 atoms C. 2.7 x 10^24 atoms D. 2.1 x 10^24 atoms Answer: D | 2308.13149#11 | 2308.13149 | [
"2307.03109"
]
|