Sequential Recommendation. For sequential recommendation, the agent takes the names of the user's historically interacted items, in order, as input. The agent is then prompted to predict the title of the next item that the user might interact with. Figure 2 shows an example of sequential recommendation in the beauty domain of Amazon Reviews. For a specific user {userID} with the interaction history
2308.14296#46
RecMind: Large Language Model Powered Agent For Recommendation
Recent advancements in instructing Large Language Models (LLMs) to utilize external tools and execute multi-step plans have significantly enhanced their ability to solve intricate tasks, ranging from mathematical problems to creative writing. Yet, there remains a notable gap in studying the capacity of LLMs to respond to personalized queries such as a recommendation request. To bridge this gap, we have designed an LLM-powered autonomous recommender agent, RecMind, which is capable of providing precise personalized recommendations through careful planning, utilizing tools for obtaining external knowledge, and leveraging individual data. We propose a novel algorithm, Self-Inspiring, to improve the planning ability of the LLM agent. At each intermediate planning step, the LLM 'self-inspires' to consider all previously explored states to plan the next step. This mechanism greatly improves the model's ability to comprehend and utilize historical planning information for recommendation. We evaluate RecMind's performance in various recommendation scenarios, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. Our experiments show that RecMind outperforms existing zero/few-shot LLM-based recommendation methods on different recommendation tasks and achieves performance competitive with a recent model, P5, which requires full pre-training for the recommendation tasks.
http://arxiv.org/pdf/2308.14296
Yancheng Wang, Ziyan Jiang, Zheng Chen, Fan Yang, Yingxue Zhou, Eunah Cho, Xing Fan, Xiaojiang Huang, Yanbin Lu, Yingzhen Yang
cs.IR, cs.AI
Primary category: cs.IR
Published: 2023-08-28. Updated: 2023-08-28.
[ { "id": "2302.13971" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2307.02046" }, { "id": "2305.15334" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2304.03442" }, { "id": "2210.03629" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2307.09288" }, { "id": "2109.01652" }, { "id": "1511.06939" } ]
Table 4: Performance comparison on explanation generation on Amazon Reviews (Beauty) and Yelp.

| Methods | Beauty BLEU2 | Beauty ROUGE1 | Beauty ROUGE2 | Beauty ROUGEL | Yelp BLEU2 |
|---|---|---|---|---|---|
| P5 (pre-trained expert, few-shot) | 0.9783 | 17.0412 | 1.8962 | 12.1709 | 1.2784 |
| ChatGPT (zero-shot) | 0.0359 | 9.7892 | 0.7994 | 5.1215 | 0.0419 |
| ChatGPT (few-shot) | 1.1766 | 11.8905 | 2.5894 | 5.8920 | 1.1766 |
| RecMind-CoT (zero-shot) | 0.8985 | 11.0597 | 1.9675 | 7.7471 | 1.1052 |
| RecMind-CoT (few-shot) | 1.3096 | 12.7987 | 2.7015 | 8.0164 | 1.2759 |
| RecMind-ToT (BFS, few-shot) | 1.3054 | 12.8249 | 2.7050 | 8.0596 | 1.2960 |
| RecMind-ToT (DFS, few-shot) | 1.3159 | 12.8975 | 2.7125 | 8.1150 | — |
| RecMind-SI (zero-shot) | 1.1589 | 11.6794 | 2.2460 | 7.8974 | — |
| RecMind-SI (few-shot) | 1.3459 | 13.2560 | 2.7479 | 8.9614 | — |

(The remaining Yelp values, including the ROUGE1/2/L columns, are truncated at this chunk boundary.)
2308.14296#47
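Tables 4 and 5 report BLEU2 and ROUGE-1/2/L. For reference, these metrics can be computed with standard libraries; the sketch below uses nltk and the rouge-score package, which is our choice of tooling — the paper does not state which implementation it used.

```python
# Sketch: BLEU-2 and ROUGE-1/2/L for one (reference, candidate) pair,
# using nltk and the rouge-score package. Tooling choice is ours; the
# paper does not specify its evaluation implementation.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
from rouge_score import rouge_scorer

def score_pair(reference: str, candidate: str) -> dict[str, float]:
    bleu2 = sentence_bleu(
        [reference.split()], candidate.split(),
        weights=(0.5, 0.5),  # bigram BLEU, i.e. BLEU-2
        smoothing_function=SmoothingFunction().method1,
    )
    scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
    rouge = scorer.score(reference, candidate)
    return {
        "BLEU2": 100 * bleu2,  # the tables appear to report scores scaled by 100
        "ROUGE1": 100 * rouge["rouge1"].fmeasure,
        "ROUGE2": 100 * rouge["rouge2"].fmeasure,
        "ROUGEL": 100 * rouge["rougeL"].fmeasure,
    }
```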
Table 5: Performance comparison on review summarization on Amazon Reviews (Beauty).

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
|---|---|---|---|---|
| P5 (pre-trained expert, few-shot) | 2.0357 | 8.3079 | 1.5892 | 7.4820 |
| ChatGPT (zero-shot) | 0.6532 | 3.8579 | 0.3059 | 3.3552 |
| ChatGPT (few-shot) | 0.9137 | 4.0179 | 0.4179 | 3.6790 |
| RecMind-CoT (zero-shot) | 1.3596 | 5.0279 | 0.7156 | 4.7689 |
| RecMind-CoT (few-shot) | 1.3786 | 5.5397 | 0.8456 | 4.8024 |
| RecMind-ToT (BFS, few-shot) | 1.3737 | 5.4187 | 0.8254 | 4.8157 |
| RecMind-ToT (DFS, few-shot) | 1.3798 | 5.5794 | 0.8351 | 4.8976 |
| RecMind-SI (zero-shot) | 1.3688 | 5.4579 | 0.8974 | 4.9746 |
| RecMind-SI (few-shot) | 1.4014 | 6.0354 | 1.0128 | 5.5716 |
2308.14296#49
Review Summarization. In this task, we evaluate the performance of RecMind in summarizing review comments into shorter review titles. We filter out test data with automatically generated review titles such as 'Five Stars'. Figure 2 shows an example of review summarization in the beauty domain of Amazon Reviews. The results of review summarization on Amazon Reviews are shown in Table 5. The results show that RecMind performs better than recent LLMs such as ChatGPT. However, RecMind does not outperform P5 on review summarization. This gap stems from the advantage of P5, which is fully trained to optimize the review summarization task. In contrast, GPT-based models such as RecMind usually prioritize generating summaries after deeply understanding the reviews.
2308.14296#50
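The filtering step described above — dropping test examples whose titles were auto-generated — might look like the following sketch; the field name `review_title` and the blacklist entries beyond "Five Stars" are assumptions.

```python
# Sketch of the test-set filter described above. Only "Five Stars" is given
# as an example in the text; the other star phrases and the field name
# `review_title` are assumptions.
AUTO_TITLES = {"one star", "two stars", "three stars", "four stars", "five stars"}

def filter_test_set(test_data: list[dict]) -> list[dict]:
    # Keep only examples whose title was written by the user, not auto-generated.
    return [
        ex for ex in test_data
        if ex["review_title"].strip().lower() not in AUTO_TITLES
    ]
```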
in chronological order, the agent will be prompted: "user {userID} has interacted with the following items in chronological order: ['Item List']. Please recommend the next item that the user might interact with. Choose the top 10 products to recommend in order of priority, from highest to lowest." We include another baseline, S3-Rec (Zhou et al. 2020), which leverages self-supervised objectives to help the sequential recommendation model better discover the correlations among different items and their attributes. The results of sequential recommendation on Amazon Reviews (beauty domain) and Yelp are shown in Table 3. It is observed from the results that RecMind with Self-Inspiring achieves performance comparable to the fully trained models P5 and S3-Rec. Without diverse planning methods such as tree-of-thoughts and our proposed self-inspiring, LLMs prefer items whose names are semantically similar to the names of preceding items. In contrast, with the help of explicit reasoning methods as well as access to domain knowledge, RecMind gradually explores helpful information, such as connections between items in the database and other users' interaction histories.
2308.14296#51
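The quoted prompt can be assembled mechanically from a user ID and the titles of previously interacted items. A small sketch, with illustrative parameter names (the paper does not publish its prompt-construction code):

```python
# Sketch: assembling the sequential-recommendation prompt quoted above.
# Parameter names are illustrative; the paper does not publish this code.
def build_seq_rec_prompt(user_id: str, item_titles: list[str]) -> str:
    items = ", ".join(f"'{title}'" for title in item_titles)
    return (
        f"user {user_id} has interacted with the following items "
        f"in chronological order: [{items}]. "
        "Please recommend the next item that the user might interact with. "
        "Choose the top 10 products to recommend in order of priority, "
        "from highest to lowest."
    )
```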
# 4.4 Experimental Results on Explainability-oriented Recommendation Tasks

With the development of NLP techniques for recommendation tasks, recent works (Geng et al. 2022) have started to explore how NLP models can improve the explainability of recommender systems, for example by generating text explanations for a given recommendation, or for a given interaction between a user and an item. In this section, we evaluate the performance of RecMind on two explainability-oriented recommendation tasks: explanation generation and review summarization.

Explanation Generation. In explanation generation, we assess the performance of RecMind in crafting textual explanations that justify a user's interaction with a specific item. Figure 2 shows an example of explanation generation in the beauty domain of Amazon Reviews. The text review given by the user on the given item is taken as the ground truth. The results of explanation generation on Amazon Reviews and Yelp are summarized in Table 4. The results indicate that RecMind, when leveraging self-inspiring techniques, can achieve performance comparable to the fully trained P5 model. This is aided by the in-domain knowledge retrieved from personalized memory, such as reviews from other users on the same item.
2308.14296#52
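Since the user's own review serves as the ground truth, evaluating explanation generation reduces to scoring each generated explanation against that review and averaging over the test set. A sketch under assumed field names; `metric_fn` would be, for example, the BLEU/ROUGE helper sketched earlier:

```python
# Sketch: corpus-level evaluation loop for explanation generation. `agent`
# maps a (user, item) pair to a generated explanation; `metric_fn` scores it
# against the user's actual review (the ground truth). Field names are assumed.
from typing import Callable

def evaluate_explanations(
    agent: Callable[[str, str], str],
    metric_fn: Callable[[str, str], dict],
    test_data: list[dict],
) -> dict[str, float]:
    totals: dict[str, float] = {}
    for ex in test_data:
        pred = agent(ex["user_id"], ex["item_id"])
        for name, value in metric_fn(ex["review_text"], pred).items():
            totals[name] = totals.get(name, 0.0) + value
    return {name: total / len(test_data) for name, total in totals.items()}
```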
# 4.5 Transfer to Items in Unseen Domains

The advantage of using a large language model as a unified recommendation model is that it can judge the likelihood of any event by expressing the event in natural language. In our experiments in Section 4.3, we found that RecMind with in-domain few-shot examples achieves much better performance. In this section, we test how few-shot RecMind performs when recommending items from unseen domains. Specifically, we include few-shot examples from the Beauty domain and test the performance of RecMind on rating prediction, direct recommendation, and explanation generation with test data from the Toys and Sports domains. We include a ChatGPT prompting baseline and P5 for comparison. In the few-shot ChatGPT baseline, the user-specific examples included in the prompts are from the Beauty domain. For P5, the model trained on the Beauty domain is used for evaluation. We evaluate the domain transfer capabilities of all approaches on rating prediction, direct recommendation, and explanation generation.
2308.14296#53
Table 6: Performance on domain transfer. Comparisons are performed on MAE for rating prediction, HR@5 for direct recommendation, and BLEU2 for explanation generation.

| Methods | Domain | MAE | HR@5 | BLEU2 |
|---|---|---|---|---|
| P5 | Beauty → Toys | 0.7932 | 0.0852 | 1.4326 |
| P5 | Beauty → Sports | 0.7013 | 0.1007 | 0.8924 |
| ChatGPT | Beauty → Toys | 0.7354 | 0.0649 | 1.4416 |
| ChatGPT | Beauty → Sports | 0.6895 | 0.7210 | 0.8795 |
| RecMind-ToT | Beauty → Toys | 0.6845 | 0.0841 | 1.3994 |
| RecMind-ToT | Beauty → Sports | 0.6457 | 0.0924 | 1.0002 |
| RecMind-SI | Beauty → Toys | 0.6779 | 0.0902 | 1.5940 |
| RecMind-SI | Beauty → Sports | 0.6245 | 0.1124 | 1.0537 |

We report the MAE for rating prediction, HR@5 for direct recommendation, and BLEU2 for explanation generation in Table 6. It can be observed that RecMind shows better domain transfer performance compared with the baselines P5 and ChatGPT. In contrast, the fine-tuned language model P5 tends to overfit to the domain of its training data.
2308.14296#54
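For completeness, the two non-text metrics in Table 6 have standard definitions; a sketch, assumed to match the paper's usage:

```python
# Sketch: the two non-text metrics of Table 6, with their standard
# definitions (assumed to match the paper's usage). Lower MAE is better;
# higher HR@5 is better.
def mae(predictions: list[float], labels: list[float]) -> float:
    return sum(abs(p - y) for p, y in zip(predictions, labels)) / len(labels)

def hit_rate_at_k(ranked_lists: list[list[str]], targets: list[str], k: int = 5) -> float:
    # Fraction of users whose held-out target item appears in the top-k list.
    hits = sum(target in ranked[:k] for ranked, target in zip(ranked_lists, targets))
    return hits / len(targets)
```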
# 4.6 Human Evaluation

In this section, we leverage human evaluation to assess the quality and rationality of the explanations generated by RecMind. Three human evaluators (Eva 1, Eva 2, Eva 3) are asked to rank the explanations generated by P5, few-shot ChatGPT, few-shot RecMind with tree-of-thoughts, few-shot RecMind with self-inspiring, and the ground truth on 100 test examples. We show the top-1 ratios for the results generated by each method in Table 7, per evaluator. The top-1 ratio is the proportion of test examples on which a given method ranks first against the other methods, based on each annotator's selection. We also report the average top-1 ratio across all three evaluators for each method. Although annotators may have individual subjectivity, evaluations by different evaluators consistently show that few-shot RecMind based on self-inspiring, i.e., RecMind-SI, yields the most satisfactory results.
2308.14296#55
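The top-1 ratio and its cross-evaluator average are straightforward to compute; a sketch with an illustrative input layout (one list of rank-1 method names per evaluator):

```python
# Sketch: per-evaluator top-1 ratios and their cross-evaluator average, as
# reported in Table 7. Input layout (one list of rank-1 method names per
# evaluator) is illustrative, not the authors' annotation format.
from collections import Counter

def top1_ratios(first_choices: list[str]) -> dict[str, float]:
    # first_choices: for one evaluator, the method ranked first on each example
    counts = Counter(first_choices)
    return {method: n / len(first_choices) for method, n in counts.items()}

def average_ratios(per_evaluator: list[dict[str, float]]) -> dict[str, float]:
    methods = {m for ratios in per_evaluator for m in ratios}
    return {
        m: sum(r.get(m, 0.0) for r in per_evaluator) / len(per_evaluator)
        for m in methods
    }
```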
Table 7: Human evaluation results on explanation generation.

| Methods | Eva 1 | Eva 2 | Eva 3 | Average |
|---|---|---|---|---|
| Ground Truth | 0.12 | 0.13 | 0.22 | 0.157 |
| P5 | 0.02 | 0.06 | 0.03 | 0.037 |
| ChatGPT | 0.15 | 0.23 | 0.18 | 0.187 |
| RecMind-ToT | 0.29 | 0.28 | 0.25 | 0.273 |
| RecMind-SI | 0.42 | 0.30 | 0.32 | 0.347 |

# 5 Conclusions

In this work, we propose RecMind, a novel LLM-powered autonomous agent for various recommendation tasks. RecMind consists of three major components: planning, which breaks a task down into smaller sub-tasks; memory, which gives the agent the capability to retain and recall information over extended periods; and tools for obtaining relevant extra information that is missing from the model weights. We further propose a novel planning technique, self-inspiring, which can integrate the merits of multiple reasoning paths for better planning. We evaluate RecMind across various recommendation tasks, including both precision-oriented and explainability-oriented tasks. The evaluation results show that RecMind with self-inspiring outperforms existing LLM-based recommendation methods on different recommendation tasks and achieves performance competitive with a recent model, P5, which is fully pre-trained for the recommendation task.
2308.14296#56
# References

Anil, R.; Dai, A. M.; Firat, O.; Johnson, M.; Lepikhin, D.; Passos, A.; Shakeri, S.; Taropa, E.; Bailey, P.; Chen, Z.; et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.

Bao, K.; Zhang, J.; Zhang, Y.; Wang, W.; Feng, F.; and He, X. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447.

Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877–1901.

Chase, H. 2023. langchain. GitHub repository.
2308.14296#57
Cheng, H.-T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, 7–10.

Fan, W.; Zhao, Z.; Li, J.; Liu, Y.; Mei, X.; Wang, Y.; Tang, J.; and Li, Q. 2023. Recommender systems in the era of large language models (LLMs). arXiv preprint arXiv:2307.02046.

Geng, S.; Liu, S.; Fu, Z.; Ge, Y.; and Zhang, Y. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems, 299–315.

Gravitas, S. 2023. Auto-GPT. GitHub repository.
2308.14296#58
He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639–648.

Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939.

Hou, Y.; Zhang, J.; Lin, Z.; Lu, H.; Xie, R.; McAuley, J.; and Zhao, W. X. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845.

Kang, W.-C.; Ni, J.; Mehta, N.; Sathiamoorthy, M.; Hong, L.; Chi, E.; and Cheng, D. Z. 2023. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint arXiv:2305.06474.
2308.14296#59
Koren, Y.; Bell, R.; and Volinsky, C. 2009a. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37.

Koren, Y.; Bell, R. M.; and Volinsky, C. 2009b. Matrix factorization techniques for recommender systems. Computer, 42.

Lin, J.; Dai, X.; Xi, Y.; Liu, W.; Chen, B.; Li, X.; Zhu, C.; Guo, H.; Yu, Y.; Tang, R.; and Zhang, W. 2023. How can recommender systems benefit from large language models: A survey. ArXiv, abs/2306.05817.

Linden, G.; Smith, B.; and York, J. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Distributed Systems Online, 4.

Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. B. 2023. Is ChatGPT a good recommender? A preliminary study. ArXiv, abs/2304.10149.

Nakajima, Y. 2023. babyagi. GitHub repository.
2308.14296#61
Nakano, R.; Hilton, J.; Balaji, S.; Wu, J.; Ouyang, L.; Kim, C.; Hesse, C.; Jain, S.; Kosaraju, V.; Saunders, W.; et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.

Ni, J.; Li, J.; and McAuley, J. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 188–197.

OpenAI. 2023. GPT-4 technical report. arXiv preprint arXiv:2303.08774.

Park, J. S.; O'Brien, J. C.; Cai, C. J.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.
2308.14296#62
Patil, S. G.; Zhang, T.; Wang, X.; and Gonzalez, J. E. 2023. Gorilla: Large language model connected with massive APIs. arXiv preprint arXiv:2305.15334.

Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; Lomeli, M.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023. Toolformer: Language models can teach themselves to use tools. ArXiv, abs/2302.04761.

Schulman, J.; Zoph, B.; Kim, C.; Hilton, J.; Menick, J.; Weng, J.; Uribe, J. F. C.; Fedus, L.; Metz, L.; Pokorny, M.; et al. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI blog.
2308.14296#63
Shen, Y.; Song, K.; Tan, X.; Li, D.; Lu, W.; and Zhuang, Y. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580.

Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1441–1450.

Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.
2308.14296#64
Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.

Wang, L.; and Lim, E.-P. 2023. Zero-shot next-item recommendation using large pretrained language models. ArXiv, abs/2304.03153.

Wei, J.; Bosma, M.; Zhao, V. Y.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.

Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E. H.; Xia, F.; Le, Q.; and Zhou, D. 2022. Chain-of-thought prompting elicits reasoning in large language models. ArXiv, abs/2201.11903.
2308.14296#65
Yang, F.; Chen, Z.; Jiang, Z.; Cho, E.; Huang, X.; and Lu, Y. 2023. PALR: Personalization aware LLMs for recommendation. arXiv e-prints, arXiv–2305.

Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. Tree of thoughts: Deliberate problem solving with large language models. ArXiv, abs/2305.10601.

Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.

Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management.
2308.14296#66
# A Appendix

# A.1 Ablation Study on Foundation LLMs

In this section, we study how RecMind performs with different types of foundation LLMs as controllers. We test RecMind with self-inspiring on three different foundation LLMs, namely GPT-3.5, text-davinci-003, and GPT-4, for sequential recommendation on three different domains of Amazon Reviews. The results are illustrated in Figure 4. It can be observed from the results that the performance of RecMind is not sensitive to the choice of foundation LLM. Although GPT-4 demonstrates enhanced reasoning in addressing complex problems, GPT-3.5 can also deliver commendable performance when leveraging the superior capabilities of the RecMind framework.

Figure 4: Performance comparison of RecMind-SI with different types of foundation LLMs (bar chart over the Beauty, Sports, and Toys domains; legend: GPT-3.5, text-davinci-003, GPT-4).
# A.2 Additional Experiment Results on Amazon Reviews

In this section, we provide additional experiment results of RecMind and all compared methods on the Sports and Toys domains in Amazon Reviews. The results in rating prediction on the Sports and Toys domains of Amazon Reviews are shown in Table 8. The results in direct and sequential recommendation on the Sports and Toys domains are shown in Table 9 and Table 10, respectively. The results in review summarization and explanation generation on the Sports and Toys domains are shown in Table 11 and Table 12, respectively. As indicated in the experimental results, RecMind also shows good performance on data from other domains of Amazon Reviews.
Table 8: Performance comparison in rating prediction on Sports and Toys domains of Amazon Reviews.

| Methods | Sports RMSE | Sports MAE | Toys RMSE | Toys MAE |
| --- | --- | --- | --- | --- |
| MF | 1.0274 | 0.7975 | 1.0193 | 0.8024 |
| MLP | 1.1277 | 0.7626 | 1.1215 | 0.8097 |
| P5 (fine-tuned, few-shot) | 1.0534 | 0.6784 | 1.0625 | 0.7134 |
| ChatGPT (zero-shot) | 1.2723 | 1.0637 | 1.3213 | 1.0117 |
| ChatGPT (few-shot) | 1.0929 | 0.6957 | 1.0519 | 0.7047 |
| RecMind-CoT (zero-shot) | 1.1490 | 0.8042 | 1.1680 | 0.8232 |
| RecMind-CoT (few-shot) | 1.0325 | 0.6446 | 1.0403 | 0.6905 |
| RecMind-ToT (BFS, few-shot) | 1.0307 | 0.6289 | 1.0279 | 0.6823 |
| RecMind-ToT (DFS, few-shot) | 1.0545 | 0.6433 | 1.0196 | 0.6801 |
| RecMind-SI (zero-shot) | 1.1230 | 0.7913 | 1.1412 | 0.8103 |
| RecMind-SI (few-shot) | 1.0124 | 0.6122 | 1.0086 | 0.6712 |
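For reference, the two error metrics in Table 8 are the standard ones over predicted versus observed star ratings; a minimal sketch:

```python
import math

def rmse(preds, targets):
    # Root mean squared error: penalizes large rating errors quadratically.
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

def mae(preds, targets):
    # Mean absolute error: average magnitude of the rating error.
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

print(rmse([4.5, 3.0], [5.0, 2.0]))  # ~0.7906
print(mae([4.5, 3.0], [5.0, 2.0]))   # 0.75
```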
Table 9: Performance comparison in direct and sequential recommendation on the Sports domain of Amazon Reviews.

Direct Recommendation

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| BPR-MLP | 0.1520 | 0.0927 | 0.2671 | 0.1296 |
| P5 (pre-trained, few-shot) | 0.1765 | 0.1196 | 0.2235 | 0.1325 |
| ChatGPT (zero-shot) | 0.0376 | 0.0317 | 0.0902 | 0.0459 |
| ChatGPT (few-shot) | 0.0388 | 0.0267 | 0.1003 | 0.0502 |
| RecMind-CoT (zero-shot) | 0.0607 | 0.0435 | 0.1259 | 0.0757 |
| RecMind-CoT (few-shot) | 0.0782 | 0.0527 | 0.1475 | 0.1034 |
| RecMind-ToT (BFS, few-shot) | 0.0874 | 0.0542 | 0.1475 | 0.1218 |
| RecMind-ToT (DFS, few-shot) | 0.0815 | 0.0557 | 0.1412 | 0.1272 |
| RecMind-SI (zero-shot) | 0.0835 | 0.0684 | 0.1379 | 0.1103 |
| RecMind-SI (few-shot) | 0.1115 | 0.0814 | 0.1769 | 0.1303 |
Sequential Recommendation

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| S3-Rec | 0.0251 | 0.0161 | 0.0385 | 0.0204 |
| P5 (pre-trained, few-shot) | 0.0357 | 0.0289 | 0.0416 | 0.0324 |
| ChatGPT (zero-shot) | 0.0039 | 0.0008 | 0.0051 | 0.0008 |
| ChatGPT (few-shot) | 0.0130 | 0.0075 | 0.0207 | 0.0070 |
| RecMind-CoT (zero-shot) | 0.0135 | 0.0090 | 0.0248 | 0.0105 |
| RecMind-CoT (few-shot) | 0.0300 | 0.0138 | 0.0437 | 0.0247 |
| RecMind-ToT (BFS, few-shot) | 0.0338 | 0.0186 | 0.0473 | 0.0272 |
| RecMind-ToT (DFS, few-shot) | 0.0316 | 0.0162 | 0.0448 | 0.0260 |
| RecMind-SI (zero-shot) | 0.0290 | 0.0151 | 0.0420 | 0.0255 |
| RecMind-SI (few-shot) | 0.0366 | 0.0240 | 0.0525 | 0.0320 |
Table 10: Performance comparison in direct and sequential recommendation on the Toys domain of Amazon Reviews.

Direct Recommendation

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| BPR-MLP | 0.1142 | 0.0688 | 0.2077 | 0.0988 |
| P5 (pre-trained, few-shot) | 0.1278 | 0.0743 | 0.1859 | 0.1089 |
| ChatGPT (zero-shot) | 0.0114 | 0.0075 | 0.0638 | 0.0191 |
| ChatGPT (few-shot) | 0.0130 | 0.0059 | 0.0805 | 0.0270 |
| RecMind-CoT (zero-shot) | 0.0399 | 0.0233 | 0.1031 | 0.0542 |
| RecMind-CoT (few-shot) | 0.0580 | 0.0295 | 0.1247 | 0.0719 |
| RecMind-ToT (BFS, few-shot) | 0.0636 | 0.0300 | 0.1257 | 0.0813 |
| RecMind-ToT (DFS, few-shot) | 0.0603 | 0.0315 | 0.1204 | 0.0817 |
| RecMind-SI (zero-shot) | 0.0577 | 0.0432 | 0.1161 | 0.0828 |
| RecMind-SI (few-shot) | 0.0813 | 0.0532 | 0.1461 | 0.0998 |
Sequential Recommendation

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| S3-Rec | 0.0443 | 0.0294 | 0.0700 | 0.0376 |
| P5 (pre-trained, few-shot) | 0.0612 | 0.0524 | 0.0702 | 0.0569 |
| ChatGPT (zero-shot) | 0.0192 | 0.0158 | 0.0212 | 0.0165 |
| ChatGPT (few-shot) | 0.0282 | 0.0231 | 0.0367 | 0.0230 |
| RecMind-CoT (zero-shot) | 0.0285 | 0.0246 | 0.0408 | 0.0265 |
| RecMind-CoT (few-shot) | 0.0452 | 0.0294 | 0.0597 | 0.0407 |
| RecMind-ToT (BFS, few-shot) | 0.0490 | 0.0342 | 0.0633 | 0.0432 |
| RecMind-ToT (DFS, few-shot) | 0.0468 | 0.0318 | 0.0608 | 0.0420 |
| RecMind-SI (zero-shot) | 0.0442 | 0.0307 | 0.0580 | 0.0415 |
| RecMind-SI (few-shot) | 0.0518 | 0.0396 | 0.0685 | 0.0480 |
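The ranking metrics in Tables 9 and 10 can be computed as follows; this is a standard sketch assuming one held-out ground-truth item per user, which is the usual protocol for these Amazon Reviews benchmarks:

```python
import math

def hit_rate_at_k(ranked_items, target, k):
    # HR@k: 1 if the held-out item appears anywhere in the top-k list.
    return 1.0 if target in ranked_items[:k] else 0.0

def ndcg_at_k(ranked_items, target, k):
    # NDCG@k with a single relevant item: 1/log2(rank+1) if ranked in top-k.
    for rank, item in enumerate(ranked_items[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0

ranking = ["lip balm", "face wash", "hand cream"]
print(hit_rate_at_k(ranking, "face wash", 5))  # 1.0
print(ndcg_at_k(ranking, "face wash", 5))      # 1/log2(3) ~ 0.6309
# The table entries are these scores averaged over all evaluated users.
```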
Table 11: Performance comparison in review summarization and explanation generation on the Sports domain of Amazon Reviews.

Review Summarization

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 2.5874 | 11.8971 | 3.0257 | 10.5472 |
| ChatGPT (zero-shot) | 0.9024 | 5.7402 | 1.2493 | 3.6791 |
| ChatGPT (few-shot) | 1.2579 | 6.3190 | 1.4257 | 3.8912 |
| RecMind-CoT (zero-shot) | 1.5840 | 6.5310 | 1.4390 | 5.0140 |
| RecMind-CoT (few-shot) | 1.6014 | 6.7125 | 1.5479 | 5.2175 |
| RecMind-ToT (BFS, few-shot) | 1.7125 | 6.7986 | 1.5724 | 5.3794 |
| RecMind-ToT (DFS, few-shot) | 1.6542 | 6.6540 | 1.5639 | 5.2960 |
| RecMind-SI (zero-shot) | 1.6120 | 6.6259 | 1.5029 | 5.1891 |
| RecMind-SI (few-shot) | 1.7388 | 6.8130 | 1.6217 | 5.5632 |
Explanation Generation

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 1.1412 | 14.0329 | 2.1279 | 11.1894 |
| ChatGPT (zero-shot) | 0.0611 | 7.2892 | 0.9921 | 5.6923 |
| ChatGPT (few-shot) | 1.2358 | 9.6405 | 2.8723 | 6.2824 |
| RecMind-CoT (zero-shot) | 0.9687 | 8.3097 | 2.1320 | 7.1427 |
| RecMind-CoT (few-shot) | 1.3874 | 11.0487 | 3.0216 | 8.1146 |
| RecMind-ToT (BFS, few-shot) | 1.3765 | 11.5749 | 2.8023 | 8.4256 |
| RecMind-ToT (DFS, few-shot) | 1.4018 | 11.6475 | 3.0107 | 8.6032 |
| RecMind-SI (zero-shot) | 1.2374 | 9.4294 | 2.5405 | 8.2120 |
| RecMind-SI (few-shot) | 1.4287 | 12.0060 | 3.0481 | 9.5812 |
Table 12: Performance comparison in review summarization and explanation generation on the Toys domain of Amazon Reviews.

Review Summarization

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 1.8760 | 9.0351 | 1.5230 | 8.1746 |
| ChatGPT (zero-shot) | 0.5941 | 4.4571 | 0.4052 | 4.0612 |
| ChatGPT (few-shot) | 0.8420 | 4.8179 | 0.3178 | 4.2889 |
| RecMind-CoT (zero-shot) | 1.1579 | 5.7276 | 0.7158 | 5.5691 |
| RecMind-CoT (few-shot) | 1.2394 | 6.3395 | 0.9453 | 5.8123 |
| RecMind-ToT (BFS, few-shot) | 1.2668 | 6.3186 | 0.9251 | 5.6159 |
| RecMind-ToT (DFS, few-shot) | 1.2515 | 6.2791 | 0.9356 | 5.5976 |
| RecMind-SI (zero-shot) | 1.1897 | 6.2578 | 0.8976 | 5.8724 |
| RecMind-SI (few-shot) | 1.2974 | 6.8352 | 1.1125 | 6.2718 |
Explanation Generation

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 2.2850 | 15.0416 | 3.6798 | 12.1065 |
| ChatGPT (zero-shot) | 0.1379 | 9.7892 | 1.5416 | 5.3158 |
| ChatGPT (few-shot) | 2.0169 | 11.8905 | 3.2049 | 6.2689 |
| RecMind-CoT (zero-shot) | 2.1354 | 11.0597 | 2.7590 | 7.1445 |
| RecMind-CoT (few-shot) | 2.4079 | 12.7987 | 3.5146 | 7.4153 |
| RecMind-ToT (BFS, few-shot) | 2.4565 | 12.8249 | 3.6327 | 7.6234 |
| RecMind-ToT (DFS, few-shot) | 2.4152 | 12.8975 | 3.6079 | 7.7112 |
| RecMind-SI (zero-shot) | 2.2740 | 11.6794 | 2.2460 | 7.2536 |
| RecMind-SI (few-shot) | 2.4674 | 13.2560 | 3.6920 | 7.9987 |
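The BLEU-2 and ROUGE columns in Tables 11 and 12 are n-gram and subsequence overlap scores between generated and reference texts. As one concrete example, a minimal ROUGE-L sketch (F-measure over the longest common subsequence, with simple whitespace tokenization assumed):

```python
def lcs_length(a, b):
    # Dynamic-programming longest-common-subsequence length over token lists.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[len(a)][len(b)]

def rouge_l(candidate, reference):
    c, r = candidate.split(), reference.split()
    lcs = lcs_length(c, r)
    if lcs == 0:
        return 0.0
    precision, recall = lcs / len(c), lcs / len(r)
    return 2 * precision * recall / (precision + recall)

print(rouge_l("great light moisturizer", "a great everyday moisturizer"))  # ~0.571
```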
arXiv:2308.13724v1 [cs.RO] 26 Aug 2023

# ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning

Zhehua Zhou (University of Alberta) [email protected]
Jiayang Song (University of Alberta) [email protected]
Kunpeng Yao (Swiss Federal Institute of Technology Lausanne (EPFL)) [email protected]
Zhan Shu (University of Alberta) [email protected]
Lei Ma (The University of Tokyo; University of Alberta) [email protected]

# Abstract
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to
achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions. The code related to this work is available at https://github.com/zhehuazhou/ISR-LLM.
# 1 Introduction

Large Language Models (LLMs), underpinned by deep learning architectures, have recently revolutionized artificial intelligence (AI) by demonstrating unprecedented abilities in understanding, generating, and manipulating natural language text Bommasani et al. (2021); Brown et al. (2020); Devlin et al. (2018); Radford et al. (2019); Raffel et al. (2020). This surge in LLM research has been accompanied by a growing interest in leveraging these models to tackle a diverse array of challenges across various research fields, including data analysis Agrawal et al. (2022), code generation Vaithilingam et al. (2022), reasoning Zelikman et al. (2022), robotic control Ahn et al. (2022), and so on.
Due to their rich internalized knowledge about the world Petroni et al. (2019); Davison et al. (2019), LLMs have also garnered considerable attention within the field of long-horizon sequential task planning Roijers et al. (2013). Unlike short-term robotic planning problems, long-horizon sequential task planning often involves devising interconnected actions that are spanned over extended timeframes to achieve control objectives. Since the execution of actions at one point in time can greatly impact subsequent actions and outcomes, long-horizon planning is usually considered a more challenging problem due to its inherent intricacy in managing temporal dependencies and combinatorial complexity Hartmann et al. (2022), thereby necessitating innovative planning approaches that are able to balance the trade-offs between efficiency, optimality, and adaptability.
The traditional way to address long-horizon sequential task planning typically relies on first establishing a symbolic and logic-based representation of the planning problem Haslum et al. (2019), and then employing techniques such as state space search Zhang (1999) or heuristic search Edelkamp and Schrödl (2011) to find a feasible solution. However, this method usually requires the manual specification of symbolic planning domains, which demands a notable degree of expertise in the field. Furthermore, many desirable properties of plans, e.g., user preferences, which can be specified in natural language by individuals without specialized training, may prove intricate or even infeasible to be encapsulated within formal logic frameworks. As a result, the adaptability of conventional methods is constrained, limiting their utility in diverse contexts.
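To make the contrast concrete, classical planning over such a symbolic representation reduces to search through a state graph. A toy breadth-first sketch follows, with a hand-coded two-block domain whose predicate and action names are illustrative rather than a full PDDL encoding:

```python
from collections import deque

def bfs_plan(initial, goal, actions):
    # States are frozensets of ground predicates; `actions` maps an action name
    # to (preconditions, add effects, delete effects).
    frontier = deque([(initial, [])])
    visited = {initial}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:  # every goal predicate holds
            return plan
        for name, (pre, add, delete) in actions.items():
            if pre <= state:  # action applicable in this state
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None  # goal unreachable

# Toy two-block domain: move block A from the table onto B.
actions = {
    "stack(A,B)": (frozenset({"clear(A)", "clear(B)", "on-table(A)"}),
                   frozenset({"on(A,B)"}),
                   frozenset({"on-table(A)", "clear(B)"})),
}
init = frozenset({"clear(A)", "clear(B)", "on-table(A)", "on-table(B)"})
goal = frozenset({"on(A,B)"})
print(bfs_plan(init, goal, actions))  # ['stack(A,B)']
```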
To overcome this limitation, there is a growing trend in recent studies to explore the potential of utilizing LLMs as task-agnostic reasoning modules, with the aim of facilitating more generalized and intelligent robotic planning Ahn et al. (2022); Huang et al. (2022c). Leveraging their pre-trained knowledge, these LLM-based planners are able to effectively comprehend both explicit human-generated natural language directives and the inherent constraints interwoven within planning tasks Huang et al. (2022a). This greatly reduces the necessity for labor-intensive manual rule encoding and circumvents the need for intricate specification of symbolic planning domains Lin et al. (2023). Moreover, the intuitive nature of textual prompts allows for seamless interactions between LLM-based planners and human instructors, facilitating the integration of human expertise into the planning process. However, the efficacy and reliability of such LLM-based planners are often not satisfying due to the inherent design and training methodologies of LLMs.
LLMs are essentially engineered to generate word sequences that align with human-like context, yet the assurance of their planning capabilities is not guaranteed Brown et al. (2020). Recent investigations have revealed instances where the correctness of generated actions and the success rate of task accomplishment by LLM-based planners fall short of expectations Valmeekam et al. (2022). This limitation becomes further pronounced in long-horizon sequential task planning, where complex action dependencies and extended temporal considerations introduce additional difficulties that challenge the planning abilities of LLMs.
In this work, we aim to enhance the performance of LLM in long-horizon sequential task planning. Drawing inspiration from recent research that reveals the potential for LLM improvements through self-refinement Madaan et al. (2023); Huang et al. (2022b), we propose the Iterative Self-Refined LLM (ISR-LLM) framework that utilizes the power of iterative self-refinement to improve planning outcomes. Our framework consists of three steps (see Fig. 1): (1) Preprocessing, where an LLM translator is employed to translate the natural language inputs into their respective Planning Domain Definition Language (PDDL) Haslum et al. (2019) formulations; (2) Planning, where an LLM planner takes the translated PDDL problem as input and determines the action sequence to accomplish the long-horizon sequential task planning; (3) Iterative self-refinement, where a validator is used to examine the correctness of the generated action plan and provide feedback to the LLM planner. Then based on the feedback, the LLM planner performs the iterative self-refinement process to find a revised action plan. We consider two different types of validators in our approach: an LLM-based self-validator and an external validator that leverages auxiliary verification tools.
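A minimal control-flow sketch of these three steps, where `llm` and `validate` are hypothetical helpers and the prompts and refinement budget are illustrative; only the translate, plan, validate, refine structure follows the paper's description:

```python
def isr_llm_plan(nl_task, llm, validate, max_refinements=3):
    # Step 1: preprocessing. Translate natural language into a PDDL formulation.
    pddl_problem = llm(f"Translate this task into a PDDL problem file:\n{nl_task}")

    # Step 2: planning. Draft an initial action sequence for the PDDL problem.
    plan = llm(f"Produce a step-by-step action plan for:\n{pddl_problem}")

    # Step 3: iterative self-refinement. `validate` may be an LLM-based
    # self-validator or an external checker; it returns (is_valid, feedback).
    for _ in range(max_refinements):
        is_valid, feedback = validate(pddl_problem, plan)
        if is_valid:
            return plan
        plan = llm(
            "The plan below failed validation.\n"
            f"Plan:\n{plan}\nFeedback:\n{feedback}\nProvide a corrected plan."
        )
    return plan  # best effort once the refinement budget is exhausted
```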
Through comprehensive experiments across diverse planning problem domains, we show that, compared to state-of-the-art approaches, ISR-LLM achieves better feasibility and success rate in long-horizon sequential task planning. The contributions of this work are threefold:

[Figure 1 (diagram): objective tasks (cooking, ball moving, Blocksworld) flow through preprocessing with the LLM translator (few-shot PDDL domain and problem file encoding), planning with the LLM planner (action plan generation by chain-of-thought), and self-refinement (action validation, performance analysis, feedback to the planner, and new plan generation before execution on the robotics system).]

Figure 1: Overview of the proposed ISR-LLM framework. It consists of three steps: preprocessing, planning, and iterative self-refinement.

• We present ISR-LLM, a novel framework achieved by integrating a self-refinement mechanism into LLM. This approach addresses long-horizon sequential task planning and offers remarkable advancements in both feasibility and correctness.
2308.13724#9
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
10
• We introduce and evaluate the effectiveness of two types of validators, i.e., an LLM-based self-validator and an external validator, in providing feedback to the LLM planner during the iterative self-refinement process.

• We highlight the superiority of our proposed framework over contemporary state-of-the-art methods through an investigation of ISR-LLM across three diverse planning domains.

# 2 Related Work

# 2.1 Long-Horizon Sequential Task Planning
2308.13724#10
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
11
Long-horizon sequential task planning aims to find an optimal action sequence capable of accomplishing a specified task objective Helmert (2006). In recent robotic studies, PDDL or Answer Set Programming (ASP) Brewka et al. (2011) are often utilized as the language for representing the planning problems Jiang et al. (2019). A prevalent method employed to tackle these planning tasks is to utilize a search-based or sampling-based algorithm to find a viable plan Levine and Humphreys (2003); Segovia-Aguas et al. (2021); Cohen et al. (2010). This strategy has found successful applications across diverse robotic domains, e.g., mobile robots Zhang et al. (2015), autonomous vehicles Ding et al. (2020), and robotic manipulators Garrett et al. (2020). However, these approaches rely on a predetermined symbolic and logical representation of the planning domain, which usually demands a high level of expert knowledge for formulation. Moreover, due to the inherent abundance of potential action options associated with long-horizon sequential task planning, search-based or sampling-based strategies may encounter impediments in such scenarios. Some approaches also use example plans to construct novel plans, which are often represented through a finite state machine Levesque (2005); Winner (2008). However, finding a useful example plan may be challenging or even impossible within certain task scenarios.
2308.13724#11
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
12
It is also worth mentioning that another important category of robotic planning is Task and Motion Planning (TAMP) Garrett et al. (2021), which combines high-level task planning in discrete spaces with low-level robot motion planning in continuous space as a hierarchical planning framework. In TAMP, the focus extends beyond mere task planning to encompass the executability of the determined actions, i.e., the actions must be executable by the robot with a viable motion trajectory that is subject to both robotic and environmental constraints Toussaint (2015); Driess et al. (2019). However, how to accurately ground actions generated by LLMs into feasible robot motions remains a challenging and ongoing area of research Ahn et al. (2022); Huang et al. (2022c). Therefore, in this work, we focus only on exploring the task planning capabilities of LLMs.
2308.13724#12
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
13
# 2.2 Planning with LLM

To overcome the limited generalizability of traditional task planners, researchers have started investigating the possibility of utilizing LLMs as task-agnostic planners Sharma et al. (2021); Li et al. (2022); Zeng et al. (2022); Singh et al. (2023). A multitude of studies have delved into grounding the language commands generated by LLMs to executable robotic actions Ahn et al. (2022); Huang et al. (2022c); Ding et al. (2023); Lin et al. (2023). For instance, in Ahn et al. (2022), scores are assigned to potential actions through a value function, and the action with the highest likelihood of success is selected. Similarly, Huang et al. (2022a) adopts prompt engineering to extract actions that are executable by the robots. In Huang et al. (2022c), environmental feedback is introduced to enable online adjustment of action plans that are infeasible for the robots. Although the focus of this work is not the grounding of actions, these studies illustrate the competencies of LLMs in addressing diverse robotic planning tasks.
2308.13724#13
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
14
Besides grounding language instructions, recent studies have also sought to combine LLMs with PDDL as a means of elevating the performance of LLM-based planners Valmeekam et al. (2022); Silver et al. (2022, 2023); Liu et al. (2023). In Valmeekam et al. (2022), a Blocksworld Slaney and Thiébaux (2001) benchmark is proposed to assess the LLM's capability in handling natural language inputs for planning. However, the results reveal a discouraging performance of LLMs in long-horizon task planning, even within seemingly uncomplicated tasks. In Silver et al. (2022, 2023), instead of natural language inputs, planning problems in PDDL syntax are directly presented to LLMs for generating action sequences. While this strategy contributes to enhanced performance, it inevitably diminishes the LLM's generalizability and often demands additional effort and expert knowledge for composing the corresponding PDDL files. In Liu et al. (2023), the LLM is employed not as a planner, but rather as a translator that converts natural language inputs into PDDL problems, which are subsequently solved using classical PDDL planners.
2308.13724#14
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
15
However, such an approach requires an external solver, potentially impeding the wider applicability of LLMs as task-agnostic planners. A notion akin to our self-refinement concept is introduced in Raman et al. (2022): after the generation of an action plan based on natural language inputs, it collects the error information returned from the execution of the plan. This information is then constructed into re-prompts that direct the LLM towards correcting the erroneous actions. However, such a refinement process occurs subsequent to the action execution phase. Our approach, in comparison, not only considers the utilization of an external validator to perform a similar refinement process, but also investigates the potential of LLMs for enabling pre-execution action corrections through self-validation capabilities.
2308.13724#15
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
17
# 3.1 Task Planning

In this work, we consider the problem of task planning in a setting with discrete and fully observable states, finite actions, and deterministic transitions. Such a problem P is often represented by a tuple P = ⟨S, A, T, s_init, G⟩. For each state s ∈ S within the discrete set of states S, an action a ∈ A can be selected from the set of applicable actions A(s) ⊆ A, i.e., the preconditions of the action a must be fulfilled. The transition function T : S × A → S determines the next state based on the current state and the selected action. s_init ∈ S represents the initial state and G ⊆ S is the set of goal states. A solution to the planning problem P is a sequential action plan π = (a_1, a_2, ..., a_n) that drives the initial state s_init to a goal state, i.e., with s_1 = s_init we have s_{i+1} = T(s_i, a_i) satisfied for all 1 ≤ i ≤ n and s_{n+1} ∈ G. For long-horizon sequential task planning, the number of actions n tends to be relatively large. In this work, we focus on investigating the capabilities of LLMs in solving the designated task planning problem P. Thus, our primary focus is the feasibility and success rate of planning rather than its optimality.
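To make the formalism concrete, here is a minimal sketch (not from the paper) of checking a plan π against this definition; the Action fields and the frozenset-of-predicates state encoding are illustrative assumptions.

```python
from dataclasses import dataclass

# A state is a frozenset of ground predicates, e.g. {"on-table b1", "on b3 b1"}.
State = frozenset

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # predicates that must hold before the action
    add_effects: frozenset    # predicates made true by the action
    del_effects: frozenset    # predicates made false by the action

def apply(state: State, action: Action) -> State:
    """Deterministic transition function T(s, a)."""
    assert action.preconditions <= state, f"{action.name} is not applicable"
    return State((state - action.del_effects) | action.add_effects)

def is_solution(s_init: State, plan: list, goal: frozenset) -> bool:
    """Check s_{i+1} = T(s_i, a_i) for 1 <= i <= n and s_{n+1} in G."""
    state = s_init
    for action in plan:
        if not action.preconditions <= state:
            return False  # an inapplicable action makes the plan infeasible
        state = apply(state, action)
    return goal <= state  # all goal conditions must hold in the final state
```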
2308.13724#17
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
18
# 3.2 PDDL

PDDL is a standardized encoding format designed for classical planning problems Aeronautiques et al. (1998); Fox and Long (2003). A planning problem P represented in PDDL syntax consists of two files: a domain file and a problem file. The domain file embodies the foundational rules of the planning domain. It not only defines the predicates that elucidate the configuration of the state space S, but also formulates the preconditions and effects of all possible actions a ∈ A, i.e., the transition function T. The problem file defines the available objects within the planning domain, as well as the initial state and goal conditions. Concrete examples of PDDL domain and problem files for the experiments considered in this work can be found in Appendix A.1. In this work, we assume that the natural language input provided to the LLM includes both the initial state and the goal conditions, such that the LLM translator is able to convert it into corresponding PDDL files. For more details about PDDL, we direct the interested readers to Haslum et al. (2019).
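As an illustration of the two-file split, a Blocksworld domain and problem might look roughly as follows (a sketch in the spirit of the Fig. 2 excerpts; the exact predicate and action definitions are assumptions, not the paper's Appendix A.1 files):

```python
# Illustrative Blocksworld PDDL held as Python strings; only the pickup
# action is spelled out, the remaining actions are elided.

DOMAIN_PDDL = """
(define (domain blocksworld)
  (:predicates (on-table ?x) (on ?x ?y) (clear ?x) (holding ?x) (arm-empty))
  (:action pickup
    :parameters (?x)
    :precondition (and (on-table ?x) (clear ?x) (arm-empty))
    :effect (and (holding ?x) (not (on-table ?x))
                 (not (clear ?x)) (not (arm-empty)))))
"""

PROBLEM_PDDL = """
(define (problem threeblocks)
  (:domain blocksworld)
  (:objects b1 b2 b3)
  (:init (on-table b1) (on-table b2) (on b3 b1)
         (clear b2) (clear b3) (arm-empty))
  (:goal (and (on b1 b2) (on b2 b3) (on-table b3))))
"""
```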
2308.13724#18
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
19
# 4 ISR-LLM

In this section, we introduce ISR-LLM, a novel framework that utilizes iterative self-refinement to find an action plan with improved accuracy and feasibility. It comprises three steps: preprocessing with an LLM translator, planning with an LLM planner, and an iterative self-refinement loop with a validator, which is either an LLM-based self-validator or an external validator. Details are explained as follows.
2308.13724#19
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
20
# 4.1 Preprocessing with LLM Translator

As illustrated in Fig. 1, the LLM translator first converts the given natural language instructions into a PDDL formulation, specifically representing them using the domain and problem files. The rationale for employing such a translator is grounded in its notable advantages, even though an LLM planner could be designed to operate directly on natural language inputs, as demonstrated in Lin et al. (2023). The adoption of a formal representation, i.e., PDDL, offers twofold benefits to the subsequent validation of the generated plan. Firstly, it enables the usage of existing PDDL validators as the external validator, e.g., VAL Howey et al. (2004) or PDDL.jl Zhi-Xuan (2022). This obviates the necessity of developing a custom validator and thereby saves substantial time and effort. Secondly, rather than relying solely on language cues, this approach enables the LLM-based self-validator to acquire a comprehension akin to a state-machine understanding of the system state. This, in turn, facilitates a more precise evaluation of the correctness of the selected actions.
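Invoking such an external validator can be as simple as shelling out to it; the sketch below assumes a locally built VAL binary named Validate on the PATH, and since the binary name and flags vary between VAL builds, the exact call is an assumption.

```python
import subprocess

def external_validate(domain_file, problem_file, plan_file):
    """Run an external PDDL validator (here: VAL's Validate) on a plan."""
    result = subprocess.run(
        ["Validate", "-v", domain_file, problem_file, plan_file],
        capture_output=True, text=True,
    )
    # VAL prints a step-by-step trace; the caller can scan stdout for the
    # first reported error and feed that message back to the LLM planner.
    return result.stdout
```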
2308.13724#20
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
21
In order to ensure the structural accuracy of the translated PDDL files, we adopt a technique known as few-shot in-context learning Brown et al. (2020). This technique involves embedding illustrative examples within the prompt, effectively instructing the LLM on how to formulate responses to given queries in a desired manner. Similar to Liu et al. (2023), we assume that the domain-specific knowledge pertinent to each considered planning task is available in advance, and thus include it within the few-shot examples provided to the LLM translator. An example of the prompt presented to the LLM translator for the Blocksworld planning domain (see Sec. 5.1 for a detailed explanation of this domain) is shown in Fig. 2, and a complete list of all employed few-shot examples within this work is given in Appendix A.1.
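A few-shot translator prompt can be assembled along the following lines (a sketch; the example pair is abridged with "..." and stands in for the full Appendix A.1 prompt):

```python
# Hypothetical few-shot prompt assembly for the LLM translator; the example
# texts are abridged stand-ins, not the paper's actual prompts.

FEW_SHOT_EXAMPLES = [
    (
        "I have 3 blocks. Initially: Block b1 is on the table. ... Your goal "
        "is to move the blocks such that they are stacked in the order: "
        "b1 on b2, b2 on b3, and b3 on table.",
        "Domain file:\n(define (domain blocksworld) ...)\n"
        "Problem file:\n(define (problem threeblocks) ...)",
    ),
]

def build_translator_prompt(nl_task):
    parts = [
        f"[Question]\n{q}\n[Answer]\n{a}" for q, a in FEW_SHOT_EXAMPLES
    ]
    parts.append(f"[Question]\n{nl_task}\n[Answer]\n")  # the actual query
    return "\n\n".join(parts)
```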
2308.13724#21
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
22
# 4.2 Planning with LLM Planner

Once the natural language input is translated, the LLM planner takes these PDDL files as inputs and determines an action sequence aimed at achieving the given task (see Fig. 1). In addition to few-shot in-context learning, we also integrate the Chain-of-Thought (CoT) technique Wei et al. (2022) into the prompts provided to the LLM planner. CoT operates by decomposing the overall problem into intermediate steps, thus enabling the LLM to tackle complex reasoning problems that may not be solvable via standard prompting methods. An illustrative example of the prompt presented to the LLM planner is given in Fig. 2, and a comprehensive list of all the employed few-shot examples is accessible in Appendix A.2. Within this step, we obtain an initial action plan for addressing the given planning problem. Subsequently, as detailed in the next subsection, this initial plan is examined by a validator. Utilizing the feedback received from the validator, the LLM planner performs a self-refinement to find a new plan that attempts to correct the erroneous actions.
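In the same spirit, the planner prompt pairs the translated PDDL files with a CoT-style worked example in which the plan is reasoned out goal by goal before the actions are listed (a sketch modeled on the Fig. 2 excerpt; the exact wording is assumed):

```python
# Hypothetical CoT-style planner prompt; the worked example reasons about
# the build order (bottom to top) before emitting each action pair.

COT_EXAMPLE = """[Question]
Domain file: (define (domain blocksworld) ...)
Problem file: (define (problem threeblocks) ...)
[Answer]
We need to build the blocks from bottom to top.
First: b3 on table. (unstack b3 b1) (putdown b3)
Next: b2 on b3. (pickup b2) (stack b2 b3)
Finally: b1 on b2. (pickup b1) (stack b1 b2)
"""

def build_planner_prompt(domain_pddl, problem_pddl):
    query = (
        f"[Question]\nDomain file: {domain_pddl}\n"
        f"Problem file: {problem_pddl}\n[Answer]\n"
    )
    return COT_EXAMPLE + "\n" + query
```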
2308.13724#22
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
23
# 4.3 Iterative Self-Refinement Loop with Validator

The central component of the iterative self-refinement loop is the validator, as shown in Fig. 1. Through the examination of the generated action sequence, the validator constructs feedback, pinpointing any actions considered incorrect, and subsequently conveys this information to the LLM planner. Based on the feedback, the LLM planner then initiates a self-refinement process to rectify the incorrect action and devise a new action plan. Note that, while the generated action sequence may contain multiple errors, analyzing actions subsequent to the initial error is often unnecessary, since the first error could render the foundation of all ensuing actions fundamentally flawed. Thus, the self-refinement process is executed iteratively within a loop, where in each step the validator stops at the first identified error. The information concerning this error is then returned, ensuring that each iterative stage is solely focused on rectifying this detected mistake.
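The stop-at-first-error behaviour can be sketched by composing the plan checker from Sec. 3.1 with an LLM re-planning call (reusing the Action/apply sketch above; the feedback wording and the replan callable are illustrative assumptions):

```python
# Sketch of the stop-at-first-error refinement loop.

def first_error(s_init, plan, goal):
    """Return feedback on the first invalid action, or None if the plan is valid."""
    state = s_init
    for i, action in enumerate(plan):
        if not action.preconditions <= state:
            missing = sorted(action.preconditions - state)
            return f"step {i + 1} ({action.name}) is wrong: {missing} do not hold"
        state = apply(state, action)
    if not goal <= state:
        return "the sequence ends without satisfying the goal conditions"
    return None  # no error detected

def refine(s_init, plan, goal, replan, max_iters=5):
    for _ in range(max_iters):
        feedback = first_error(s_init, plan, goal)
        if feedback is None:
            return plan  # validated plan is accepted
        plan = replan(plan, feedback)  # hypothetical LLM re-planning call
    return plan  # best effort once the iteration limit is reached
```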
2308.13724#23
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
24
Figure 2 panel content (garbled in extraction; only the panel titles and the gist of the examples are recoverable): Step 1, preprocessing with the LLM translator; Step 2, planning with the LLM planner; Step 3.2, iterative self-refinement (re-planning with the feedback obtained from the validator). The few-shot example poses a three-block Blocksworld question (b1 and b2 on the table, b3 on top of b1; goal: b1 on b2, b2 on b3, b3 on table) together with its translated PDDL domain and problem files.
2308.13724#24
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
25
Figure 2 panel content continued (garbled in extraction): the few-shot example answer reasons that the blocks must be built from bottom to top, listing (unstack b3 b1) (putdown b3), then (pickup b2) (stack b2 b3), then (pickup b1) (stack b1 b2); a validation example flags (unstack b2 b1) as wrong since b2 is not on top of b1, stops the analysis at the error, and concludes that the action sequence cannot accomplish the goal; the panel ends with the opening of the actual four-block question.
2308.13724#25
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
26
Figure 2 panel content continued (garbled in extraction): the actual four-block question (initially b1 on b2, b2 on b4, b3 on b1, b4 on the table; goal: b2 on b1, b1 on b4, b4 on b3, b3 on table), the translated PDDL files passed from Step 2, the feedback history from Step 3.1 (iterative self-refinement, feedback from the self-validator), and the re-planning instruction "The self-validation suggests an error, please find a new plan."
2308.13724#26
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
27
Figure 2: Examples of the prompts used in ISR-LLM. The prompt provided to the LLM contains two parts: the few-shot examples (shaded with a yellow color) and the actual question (blue). Details about the few-shot examples are given in Appendix A. The texts shaded with a green color represent the LLM's responses. The LLM translator first converts the natural language instructions into PDDL domain and problem files. Then, an initial plan is generated using the translated files, which is subsequently revised through an iterative self-refinement process.

The iterative self-refinement loop persists until either the validator identifies no errors or a predefined maximum number of iterations is reached. The action sequence resulting from the iterative self-refinement loop is then accepted as the final generated action sequence.
2308.13724#27
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
28
We consider two types of validators: a self-validator, which employs the LLM to assess the correctness of the generated action plan, and an external validator, which leverages external tools for performing the analysis. It is worth mentioning that, although the external validator is capable of providing accurate feedback on the feasibility of the generated plan, its implementation often demands a considerable amount of effort and may be unavailable for certain tasks. Conversely, the usage of an LLM as an internal self-validator economizes both time and effort. However, it carries the inherent risk of yielding imprecise or even erroneous feedback. The selection of the validator type, therefore, hinges upon the specific evaluation requirements and the context of the validation scenario. An example of the prompts provided to the LLM-based self-validator is shown in Fig. 2, where few-shot learning and CoT techniques are also employed. All examples used for the experimental domains explored in this work are given in Appendix A.3.

# 5 Experimental Results

To evaluate the performance of ISR-LLM in long-horizon sequential task planning, we perform experiments across three diverse planning domains. Moreover, we also investigate the influence of different LLMs on the performance of ISR-LLM, as well as the impact of the LLM translator. A detailed explanation of the experimental setup and results is provided in the following subsections.
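Sketching the self-validator side described above in Sec. 4.3 (the prompt wording is an assumption modeled on the Fig. 2 excerpt, and llm_call is a placeholder for whatever chat-completion API is in use):

```python
# Hypothetical LLM-based self-validator; actions are assumed to be strings
# such as "(pickup b1)".

SELF_VALIDATION_TEMPLATE = """Check the action sequence step by step against
the PDDL files below. Stop at the first wrong action and explain why it is
wrong; otherwise answer that the sequence accomplishes the goal.

Domain file: {domain}
Problem file: {problem}
Action sequence: {plan}
"""

def self_validate(domain, problem, actions, llm_call):
    prompt = SELF_VALIDATION_TEMPLATE.format(
        domain=domain, problem=problem, plan=" ".join(actions)
    )
    verdict = llm_call(prompt)
    # Convention assumed here: a verdict mentioning "wrong" is treated as
    # feedback; as noted above, self-validation may itself be imprecise.
    return verdict if "wrong" in verdict.lower() else None
```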
2308.13724#28
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
29
[Figure 3: Three planning domains used in this work: (a) Cooking, (b) Blocksworld, (c) Ball Moving. Each panel depicts an initial state and the corresponding goal conditions.]

# 5.1 Experimental Setup

We utilize the following three planning domains as benchmark problems to evaluate the performance of ISR-LLM. These domains are derived from existing literature and are extensively employed in planning research (Liu et al., 2023; Silver et al., 2023, 2022; Valmeekam et al., 2022). Detailed examples of each planning domain are presented in Appendix A.

• Cooking: There are n pots and a total of 6 different ingredients (see Fig. 3a). The robot's task is to add ingredients to each pot according to a prescribed recipe. Each pot has its own randomly generated recipe, which stipulates the inclusion of 2 to 4 different ingredients. The robot has three actions: picking up an ingredient, putting down an ingredient, and adding the ingredient to a pot. One constraint must be fulfilled: each ingredient may only be retrieved once, i.e., once the robot has picked up an ingredient, it must distribute it to all pots whose recipes require that ingredient.
2308.13724#29
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
30
• Blocksworld: There are n blocks, initially placed randomly on a table. The robot's objective is to assemble these blocks into a stack that adheres to a specific prescribed order (see Fig. 3b); a sample PDDL encoding is sketched after this list. The robot has four actions: picking up a block from the table, putting down a block currently in its hand onto the table, unstacking a block from the top of another block to hold it in its hand, and stacking the block currently in its hand on top of another block. The robot can only manipulate one block at a time, i.e., any block with other blocks on top of it is considered fixed.

• Ball Moving: There are n balls, initially distributed randomly among 4 rooms (see Fig. 3c). The robot needs to relocate the balls to their predefined goal rooms, with the constraint that it can hold at most one ball at a time. The robot has three actions: picking up a ball, putting down a ball, and moving from its current room to another room.
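To make the symbolic encoding concrete, here is a minimal sketch of what a translated Blocksworld problem file for n = 3 could look like, kept as a Python string so it can be handed to a planner or validator. The predicate names (on, ontable, clear, handempty) follow the classic Blocksworld encoding and are assumptions; the exact PDDL produced by the paper's LLM translator may differ.

```python
# A classic Blocksworld PDDL problem for n = 3, stored as a Python
# string. Predicate names are the standard Blocksworld ones, not
# necessarily those emitted by the paper's LLM translator.
BLOCKSWORLD_PROBLEM = """
(define (problem blocks-3)
  (:domain blocksworld)
  (:objects b1 b2 b3)
  (:init (ontable b1) (ontable b2) (on b3 b1)
         (clear b2) (clear b3) (handempty))
  (:goal (and (on b3 b2) (on b2 b1))))
"""

print(BLOCKSWORLD_PROBLEM.strip())
```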
2308.13724#30
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
31
Table 1: Success rate of ISR-LLM in different planning domains.

| Planning domain | LLM-direct (GPT3.5) | ISR-LLM-self (GPT3.5) | ISR-LLM-external (GPT3.5) | LLM-direct (GPT4) | ISR-LLM-self (GPT4) | ISR-LLM-external (GPT4) |
|---|---|---|---|---|---|---|
| Cooking (n = 3) | 47% | 67% | 100% | 100% | 100% | 100% |
| Cooking (n = 4) | 40% | 53% | 63% | 100% | 100% | 100% |
| Blocksworld (n = 3) | 20% | 37% | 70% | 43% | 60% | 97% |
| Blocksworld (n = 4) | 10% | 17% | 53% | 40% | 60% | 80% |
| Ball Moving (n = 3) | 33% | 50% | 70% | 93% | 100% | 100% |
| Ball Moving (n = 4) | 17% | 27% | 57% | 90% | 93% | 97% |

For all three planning domains, we investigate two specific cases with n = 3 and n = 4, to examine the influence of the number of objects, which is directly correlated with the complexity of the task, on the performance of the proposed ISR-LLM framework. Furthermore, to evaluate the impacts of various LLMs on the planning outcomes, we employ two LLMs, namely GPT3.5 and GPT4, and compare their capabilities in task planning within the ISR-LLM framework.
2308.13724#31
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
32
For each planning task, we evaluate three different methods: (1) LLM-direct, which is the baseline approach grounded in Silver et al. (2023, 2022); Valmeekam et al. (2022). It leverages the LLM to formulate an action plan directly from the given PDDL input. To ensure a fair comparison with ISR-LLM, we utilize the LLM translator to convert natural language inputs into PDDL files in this method. (2) ISR-LLM-self, which employs the ISR-LLM framework with an LLM-based self-validator; (3) ISR-LLM-external, which incorporates an external validator to generate feedback for ISR-LLM. In order to mitigate the influence of existing PDDL validators and focus on analyzing the performance of ISR-LLM, we implement our own custom external validators in this work.
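The paper's custom external validators are not reproduced here, but the following is a hedged sketch of what one could look like for the Blocksworld domain: it replays a candidate plan step by step against a symbolic state and reports the first action whose preconditions fail. The state encoding, action-tuple format, and function name are illustrative assumptions rather than the authors' implementation.

```python
def validate_blocksworld(init_on, goal_on, plan):
    """Replay `plan` from `init_on` and report the first invalid action."""
    on = dict(init_on)   # block -> what it rests on ("table" or a block)
    holding = None       # block currently in the gripper, if any

    def clear(b):        # a block is clear if nothing rests on top of it
        return all(below != b for below in on.values())

    for i, (act, *args) in enumerate(plan):
        if act == "pickup":          # pick a clear block up from the table
            (b,) = args
            if holding or on.get(b) != "table" or not clear(b):
                return f"INVALID: action {i} ({act} {b})"
            del on[b]; holding = b
        elif act == "unstack":       # lift a clear block off another block
            b, below = args
            if holding or on.get(b) != below or not clear(b):
                return f"INVALID: action {i} ({act} {b} {below})"
            del on[b]; holding = b
        elif act == "putdown":       # place the held block on the table
            (b,) = args
            if holding != b:
                return f"INVALID: action {i} ({act} {b})"
            on[b] = "table"; holding = None
        elif act == "stack":         # place the held block on a clear block
            b, below = args
            if holding != b or not clear(below):
                return f"INVALID: action {i} ({act} {b} {below})"
            on[b] = below; holding = None
        else:
            return f"INVALID: action {i} (unknown action {act})"
    return "VALID" if on == goal_on and holding is None else "INVALID: goal not reached"

# Demo: restack three blocks so that b3 is on b2 and b2 is on b1.
plan = [("unstack", "b3", "b1"), ("putdown", "b3"),
        ("pickup", "b2"), ("stack", "b2", "b1"),
        ("pickup", "b3"), ("stack", "b3", "b2")]
print(validate_blocksworld({"b1": "table", "b2": "table", "b3": "b1"},
                           {"b1": "table", "b2": "b1", "b3": "b2"}, plan))  # VALID
```

Because the replay knows exactly which step failed, this kind of validator can return the fine-grained feedback discussed later in the paper.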
2308.13724#32
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
33
We randomly generate 30 unique cases with varying initial states and goal conditions for each planning task (one possible sampling scheme is sketched at the end of this passage). The few-shot examples used for the LLM translator, the LLM planner, and the LLM-based self-validator are given in Appendix A. All LLM responses from the experiments are available on our website1. The success rates of task accomplishment for the three aforementioned methods are recorded. All experiments are conducted on a laptop equipped with an Intel Core i7-10870H CPU @ 2.20 GHz (8 cores) and an NVIDIA RTX 3080 Max-Q GPU with 16 GB VRAM. The detailed results are presented in the next subsection.

# 5.2 Performance of ISR-LLM

The results of the experiments are summarized in Table 1. In the cases utilizing GPT3.5, the proposed ISR-LLM framework demonstrates a notable enhancement in success rates across all planning domains compared to the baseline approach. While the LLM-based self-validator contributes an increase of roughly 15%, the external validator can further amplify the success rate by roughly 40% to 50%. The only exception occurs in the n = 4 case of the Cooking domain, where a 23% increase is observed. This might be attributed to the excessive number of required actions in this planning task, which renders the LLM less effective at correcting errors.
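Returning to the test-case setup described at the start of this subsection, the snippet below is a hedged sketch of how such random Blocksworld cases could be sampled: a random initial placement of n blocks plus a random goal stacking order. The paper does not specify its exact sampling procedure, so the scheme, function name, and state encoding here are illustrative assumptions.

```python
import random

def random_blocksworld_case(n, seed):
    rng = random.Random(seed)
    blocks = [f"b{i + 1}" for i in range(n)]
    # Initial state: place blocks one by one, each either on the table
    # or on top of a currently clear block.
    on, clear = {}, []
    for b in rng.sample(blocks, n):
        target = rng.choice(["table"] + clear)
        on[b] = target
        if target != "table":
            clear.remove(target)
        clear.append(b)
    # Goal: a single stack in a random order, bottom block on the table.
    order = rng.sample(blocks, n)
    goal = {order[0]: "table"}
    for lower, upper in zip(order, order[1:]):
        goal[upper] = lower
    return on, goal

cases = [random_blocksworld_case(4, seed) for seed in range(30)]
```

Each seeded call yields a reproducible (initial state, goal) pair, so 30 seeds give 30 distinct cases.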
2308.13724#33
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
34
The success rates are also influenced by task complexity, as indicated by the number of objects. Increases in object numbers correspond to decreased success rates in the Cooking, Blocksworld, and Ball Moving domains for all three approaches (LLM-direct: −7%, −10%, −16%; ISR-LLM-self: −14%, −20%, −23%; ISR-LLM-external: −37%, −17%, −13%). This trend reflects the increased difficulty in rectifying erroneous actions as the planning horizon extends. Moreover, the success rate varies among planning domains. Compared to the Cooking and the Ball Moving domains, the Blocksworld domain, which demands more sophisticated logical thinking, demonstrates lower success rates. Nevertheless, the proposed ISR-LLM is still able to improve the planning outcomes within this domain. It can also be observed that GPT4 greatly outperforms GPT3.5 in long-horizon sequential task planning, corroborating the common assertion that GPT4 possesses a markedly superior reasoning capability. The baseline method, i.e., LLM-direct, when coupled with GPT4, is able to achieve a success rate exceeding 90% in the Cooking and the Ball Moving domains, where ISR-LLM also maintains this high performance level. However, in the more logically complex Blocksworld domain, GPT4 demonstrates diminished performance using the baseline approach. Nevertheless,
2308.13724#34
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
37
[Figure 4: Grounding of actions in the Blocksworld domain with four blocks: (e) pick up b1, (f) stack b1 on b3, (g) pick up b4, (h) stack b4 on b1. Initially, blocks b2 (red), b3 (green), and b4 (pink) are on the table, and block b1 (blue) is on top of block b2. The goal is to stack the blocks in the given order: b4 on b1, b1 on b3, b3 on b2, and b2 on the table.]

the employment of ISR-LLM also elevates the success rate for this domain, with the self-validator contributing an increase of about 20% and the external validator an enhancement of more than 40%. Interestingly, the influence of the number of objects appears less pronounced when GPT4 is utilized. This may be attributed to GPT4's enhanced reasoning capabilities, which facilitate more effective logical thinking and thereby mitigate the impact of the number of objects on the results.
2308.13724#37
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
38
# 5.3 Influence of the LLM Translator

We also evaluate the influence of the LLM translator, using the Blocksworld domain with n = 3 and GPT3.5 as an example, as this is the case in which the efficacy of ISR-LLM is most evident. By omitting the LLM translator and directly utilizing natural language input, we compare the success rates of task planning and present the results in Table 2. While the LLM translator only slightly improves the planning performance of the baseline approach, the self-validator benefits greatly from the translator, showing a 20% increase in the success rate. The reason could be that the translated PDDL files offer a symbolic and logical representation of the planning domain, allowing the LLM to form a more concrete understanding of the system state, as opposed to relying solely on linguistic cues. In contrast, the performance of the external validator remains relatively consistent, irrespective of the presence of the LLM translator. This consistency arises from our custom validator's ability to provide accurate feedback whether or not PDDL formulations are employed. However, as previously mentioned, introducing translated PDDL files enables the use of existing PDDL validators, potentially saving substantial time and effort that would otherwise be needed to implement a custom validator.
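As a rough illustration of the translation step evaluated here, the sketch below shows how a few-shot translator prompt could be assembled. The wording and the embedded example are illustrative assumptions, since the paper's actual prompts are listed in its Appendix A.

```python
# Hedged sketch of the LLM translator: a few-shot prompt asking the
# model to rewrite a natural language task as a PDDL problem file.
TRANSLATOR_PROMPT = """You translate natural language planning tasks into PDDL problem files.

Task: Block b2 is on the table and block b1 is on b2. Put b1 on the table.
PDDL:
(define (problem demo) (:domain blocksworld)
  (:objects b1 b2)
  (:init (ontable b2) (on b1 b2) (clear b1) (handempty))
  (:goal (ontable b1)))

Task: {task}
PDDL:"""

def translate(llm, task: str) -> str:
    # `llm` is any callable mapping a prompt string to a completion.
    return llm(TRANSLATOR_PROMPT.format(task=task))
```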
2308.13724#38
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
39
# 5.4 Grounding the Actions

Although it is beyond the scope of this work, we further demonstrate that the generated action plan can be directly grounded into feasible robot actions when paired with a suitable motion planner. This highlights another advantage of employing the LLM translator within the ISR-LLM framework: the use of the PDDL formulation ensures that each generated action conforms to a predefined definition and structure, which simplifies the motion planner's task of converting the action plan into executable robot movements. Figure 4 illustrates this grounding process, using an example from the Blocksworld domain with four blocks. Here, a pick-and-place controller is employed to execute the four different types of actions, assuming the robot knows the locations of the blocks. The simulation is conducted in NVIDIA Omniverse Isaac Sim (https://developer.nvidia.com/isaac-sim).
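To sketch how this grounding could look in code, the snippet below maps each symbolic action to a motion primitive of a hypothetical pick-and-place controller. The `controller` interface (pick, place, table_pose, top_of) and the `block_pose` lookup are illustrative assumptions, not the Isaac Sim API used in the paper.

```python
# Grounding symbolic Blocksworld actions into motion primitives of a
# hypothetical pick-and-place controller.
def ground_plan(controller, plan, block_pose):
    for act, *args in plan:
        if act in ("pickup", "unstack"):
            controller.pick(block_pose(args[0]))        # grasp the block
        elif act == "putdown":
            controller.place(controller.table_pose())   # release on the table
        elif act == "stack":
            controller.place(controller.top_of(block_pose(args[1])))  # release on target
        else:
            raise ValueError(f"unknown action: {act}")
```

Because every PDDL action has a fixed name and argument signature, the dispatch above stays a small, closed mapping.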
2308.13724#39
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
40
# 6 Discussion

Self-Validator and External Validator. Generally, the external validator can provide feedback precise enough to identify the exact action in which an error resides, whereas the self-validator usually offers only an overall estimate of the correctness of the entire generated action plan. As a consequence, the external validator often leads to superior performance, since precise feedback greatly facilitates the correction of erroneous actions. This benefit becomes more pronounced as the planning horizon extends or when complex logical reasoning is demanded. However, as noted above, the external validator requires additional design and implementation effort, while the self-validator can be employed directly without extra work. The selection between these validator types should therefore be weighed carefully against the specific task requirements and the resources available.
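The contrast in feedback granularity can be summarized in a small sketch; the dataclass and field names below are an illustrative convention, not an interface from the paper.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Feedback:
    valid: bool
    failing_action: Optional[int] = None  # index of the first bad action, if known
    reason: str = ""

# External validator: pinpoints the exact failing step.
external = Feedback(False, failing_action=3, reason="stack b2 b1: b1 is not clear")
# Self-validator: typically only a plan-level verdict.
internal = Feedback(False, reason="the plan does not appear to reach the goal")
```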
2308.13724#40
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
41
Planning Domains. The planning capabilities of LLMs are influenced by the inherent characteristics of the planning domains. As observed from our experimental results, LLMs appear to excel in planning tasks that focus on adhering to specific instructions, such as Cooking, or performing repeated actions with identifiable patterns, e.g., Ball Moving. Conversely, when the planning tasks demand more complex logical thinking, as seen in the Blocksworld domain, their planning performance tends to diminish. This phenomenon is more pronounced in the GPT4 cases. The underlying reason could be that LLMs are essentially trained to generate word sequences that mirror human-like thought processes, which suits tasks requiring instruction or pattern following. However, when critical logical reasoning becomes a vital component of the task, the inherent reasoning abilities of the LLMs become more important. This suggests that enhancing the reasoning capabilities of LLMs could be a priority when aiming to utilize them as planners for more intricate planning tasks.
2308.13724#41
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
42
Limitations. One limitation of current LLM-based planners, even with the proposed ISR-LLM framework, is that the overall success rate often fails to exceed that of traditional search-based planners. However, as an initial exploratory work, we demonstrate the potential of utilizing an LLM as a versatile and task-agnostic planner. This could significantly facilitate the deployment of various robotic systems across diverse scenarios and minimize the effort required in planning system design. Moreover, the planning abilities of the ISR-LLM framework may see substantial improvements through refinements in the underlying reasoning capabilities of the LLMs, potentially achieved through parameter fine-tuning techniques, such as integrating a fine-tuned LLM specifically designed for task planning. Another limitation stems from the inherent randomness within LLMs, which complicates assurances such as correctness or constraint satisfaction in the generated action plan. The employment of LLMs may therefore be inappropriate for certain tasks, especially those that are safety-critical.

# 7 Conclusion

In this paper, we explore the potential of leveraging LLMs for long-horizon sequential task planning based on natural language input. To improve the correctness of the generated action plan, we introduce the ISR-LLM framework, which employs an iterative self-refinement approach for automatic plan revisions.
2308.13724#42
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
43
This framework consists of three steps. First, an LLM translator converts the natural language input into a PDDL formulation, represented by PDDL files. Second, using these translated PDDL files, an LLM planner formulates an initial action plan. Third, an iterative self-refinement loop is initiated, wherein either an LLM-based self-validator or an external validator provides feedback on the correctness of the action plan, allowing the LLM planner to make the necessary revisions. Through extensive experiments across three diverse planning domains, we demonstrate that ISR-LLM surpasses existing state-of-the-art LLM-based planners in long-horizon sequential task planning. While maintaining the flexibility and generalizability to work with natural language input, our ISR-LLM framework consistently achieves high success rates in task accomplishment. For future work, we plan to incorporate motion planning within the ISR-LLM framework, aiming to facilitate reliable and efficient task and motion planning across various robotic application scenarios.
2308.13724#43
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
44
# References

Drew McDermott, Malik Ghallab, Adele Howe, Craig Knoblock, Ashwin Ram, Manuela Veloso, Daniel Weld, and David Wilkins. 1998. PDDL - The Planning Domain Definition Language. Technical Report (1998).

Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are zero-shot clinical information extractors. arXiv preprint arXiv:2205.12689 (2022).

Michael Ahn, Anthony Brohan, Noah Brown, Yevgen Chebotar, Omar Cortes, Byron David, Chelsea Finn, Chuyuan Fu, Keerthana Gopalakrishnan, Karol Hausman, et al. 2022. Do as I can, not as I say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691 (2022).
2308.13724#44
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
45
Rishi Bommasani, Drew A Hudson, Ehsan Adeli, Russ Altman, Simran Arora, Sydney von Arx, Michael S Bernstein, Jeannette Bohg, Antoine Bosselut, Emma Brunskill, et al. 2021. On the opportunities and risks of foundation models. arXiv preprint arXiv:2108.07258 (2021).

Gerhard Brewka, Thomas Eiter, and Mirosław Truszczyński. 2011. Answer set programming at a glance. Commun. ACM 54, 12 (2011), 92–103.

Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901.

Benjamin J Cohen, Sachin Chitta, and Maxim Likhachev. 2010. Search-based planning for manipulation with motion primitives. In 2010 IEEE International Conference on Robotics and Automation. IEEE, 2902–2908.
2308.13724#45
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
46
Joe Davison, Joshua Feldman, and Alexander M Rush. 2019. Commonsense knowledge mining from pretrained models. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 1173–1178.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018).

Yan Ding, Xiaohan Zhang, Chris Paxton, and Shiqi Zhang. 2023. Task and motion planning with large language models for object rearrangement. arXiv preprint arXiv:2303.06247 (2023).

Yan Ding, Xiaohan Zhang, Xingyue Zhan, and Shiqi Zhang. 2020. Task-motion planning for safe and efficient urban driving. In 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). IEEE, 2119–2125.

Danny Driess, Ozgur Oguz, and Marc Toussaint. 2019. Hierarchical task and motion planning using logic-geometric programming (HLGP). In RSS Workshop on Robust Task and Motion Planning.

Stefan Edelkamp and Stefan Schrödl. 2011. Heuristic Search: Theory and Applications. Elsevier.
2308.13724#46
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
47
Maria Fox and Derek Long. 2003. PDDL2.1: An extension to PDDL for expressing temporal planning domains. Journal of Artificial Intelligence Research 20 (2003), 61–124.

Caelan Reed Garrett, Rohan Chitnis, Rachel Holladay, Beomjoon Kim, Tom Silver, Leslie Pack Kaelbling, and Tomás Lozano-Pérez. 2021. Integrated task and motion planning. Annual Review of Control, Robotics, and Autonomous Systems 4 (2021), 265–293.

Caelan Reed Garrett, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2020. PDDLStream: Integrating symbolic planners and blackbox samplers via optimistic adaptive planning. In Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 30. 440–448.

Valentin N Hartmann, Andreas Orthey, Danny Driess, Ozgur S Oguz, and Marc Toussaint. 2022. Long-horizon multi-robot rearrangement planning for construction assembly. IEEE Transactions on Robotics 39, 1 (2022), 239–252.
2308.13724#47
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
48
Patrik Haslum, Nir Lipovetzky, Daniele Magazzeni, and Christian Muise. 2019. An Introduction to the Planning Domain Definition Language. Vol. 13. Springer.

Malte Helmert. 2006. The Fast Downward planning system. Journal of Artificial Intelligence Research 26 (2006), 191–246.

Richard Howey, Derek Long, and Maria Fox. 2004. VAL: Automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence. IEEE, 294–301.

Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022b. Large language models can self-improve. arXiv preprint arXiv:2210.11610 (2022).

Wenlong Huang, Pieter Abbeel, Deepak Pathak, and Igor Mordatch. 2022a. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning. PMLR, 9118–9147.
2308.13724#48
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
49
Wenlong Huang, Fei Xia, Ted Xiao, Harris Chan, Jacky Liang, Pete Florence, Andy Zeng, Jonathan Tompson, Igor Mordatch, Yevgen Chebotar, et al. 2022c. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608 (2022). Yuqian Jiang, Shiqi Zhang, Piyush Khandelwal, and Peter Stone. 2019. Task planning in robotics: an empirical comparison of PDDL- and ASP-based systems. Frontiers of Information Technology & Electronic Engineering 20 (2019), 363–373. Hector J Levesque. 2005. Planning with loops. In IJCAI. 509–515. John Levine and David Humphreys. 2003. Learning action strategies for planning domains using genetic programming. In Workshops on Applications of Evolutionary Computation. Springer, 684–695. Shuang Li, Xavier Puig, Chris Paxton, Yilun Du, Clinton Wang, Linxi Fan, Tao Chen, De-An Huang, Ekin Akyürek, Anima Anandkumar, et al. 2022. Pre-trained language models for interactive decision-making. Advances in Neural Information Processing Systems 35 (2022), 31199–31212.
2308.13724#49
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
50
Kevin Lin, Christopher Agia, Toki Migimatsu, Marco Pavone, and Jeannette Bohg. 2023. Text2motion: From natural language instructions to feasible plans. arXiv preprint arXiv:2303.12153 (2023). Bo Liu, Yuqian Jiang, Xiaohan Zhang, Qiang Liu, Shiqi Zhang, Joydeep Biswas, and Peter Stone. 2023. LLM+P: Empowering large language models with optimal planning proficiency. arXiv preprint arXiv:2304.11477 (2023). Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651 (2023). Fabio Petroni, Tim Rocktäschel, Patrick Lewis, Anton Bakhtin, Yuxiang Wu, Alexander H Miller, and Sebastian Riedel. 2019. Language models as knowledge bases? arXiv preprint arXiv:1909.01066 (2019).
2308.13724#50
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
51
Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. 2019. Language models are unsupervised multitask learners. OpenAI blog 1, 8 (2019), 9. Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J Liu. 2020. Exploring the limits of transfer learning with a unified text-to-text transformer. The Journal of Machine Learning Research 21, 1 (2020), 5485–5551. Shreyas Sundara Raman, Vanya Cohen, Eric Rosen, Ifrah Idrees, David Paulius, and Stefanie Tellex. 2022. Planning with large language models via corrective re-prompting. arXiv preprint arXiv:2211.09935 (2022). Diederik M Roijers, Peter Vamplew, Shimon Whiteson, and Richard Dazeley. 2013. A survey of multi-objective sequential decision-making. Journal of Artificial Intelligence Research 48 (2013), 67–113.
2308.13724#51
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
52
Javier Segovia-Aguas, Sergio Jiménez, and Anders Jonsson. 2021. Generalized planning as heuristic search. In Proceedings of the International Conference on Automated Planning and Scheduling, Vol. 31. 569–577. Pratyusha Sharma, Antonio Torralba, and Jacob Andreas. 2021. Skill induction and planning with latent language. arXiv preprint arXiv:2110.01517 (2021). Tom Silver, Soham Dan, Kavitha Srinivas, Joshua B Tenenbaum, Leslie Pack Kaelbling, and Michael Katz. 2023. Generalized Planning in PDDL Domains with Pretrained Large Language Models. arXiv preprint arXiv:2305.11014 (2023). Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. 2022. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop.
2308.13724#52
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
53
Ishika Singh, Valts Blukis, Arsalan Mousavian, Ankit Goyal, Danfei Xu, Jonathan Tremblay, Dieter Fox, Jesse Thomason, and Animesh Garg. 2023. Progprompt: Generating situated robot task plans using large language models. In 2023 IEEE International Conference on Robotics and Automation (ICRA). IEEE, 11523–11530. John Slaney and Sylvie Thiébaux. 2001. Blocks world revisited. Artificial Intelligence 125, 1-2 (2001), 119–153. Marc Toussaint. 2015. Logic-Geometric Programming: An Optimization-Based Approach to Combined Task and Motion Planning. In IJCAI. 1930–1936. Priyan Vaithilingam, Tianyi Zhang, and Elena L Glassman. 2022. Expectation vs. experience: Evaluating the usability of code generation tools powered by large language models. In CHI conference on human factors in computing systems extended abstracts. 1–7. Karthik Valmeekam, Alberto Olmo, Sarath Sreedharan, and Subbarao Kambhampati. 2022. Large Language Models Still Can’t Plan (A Benchmark for LLMs on Planning and Reasoning about Change). arXiv preprint arXiv:2206.10498 (2022).
2308.13724#53
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
54
Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837. Elly Zoe Winner. 2008. Learning domain-specific planners from example plans. Ph.D. Dissertation. Carnegie Mellon University. Eric Zelikman, Yuhuai Wu, Jesse Mu, and Noah Goodman. 2022. STaR: Bootstrapping reasoning with reasoning. Advances in Neural Information Processing Systems 35 (2022), 15476–15488. Andy Zeng, Maria Attarian, Brian Ichter, Krzysztof Choromanski, Adrian Wong, Stefan Welker, Federico Tombari, Aveek Purohit, Michael Ryoo, Vikas Sindhwani, et al. 2022. Socratic models: Composing zero-shot multimodal reasoning with language. arXiv preprint arXiv:2204.00598 (2022). Shiqi Zhang, Fangkai Yang, Piyush Khandelwal, and Peter Stone. 2015. Mobile Robot Planning Using Action Language with an Abstraction Hierarchy. In International Conference on Logic Programming and Nonmonotonic Reasoning. Springer, 502–516.
2308.13724#54
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
55
Weixiong Zhang. 1999. State-space search: Algorithms, complexity, extensions, and applications. Springer Science & Business Media. Tan Zhi-Xuan. 2022. PDDL.jl: An Extensible Interpreter and Compiler Interface for Fast and Flexible AI Planning. Ph.D. Dissertation. Massachusetts Institute of Technology. # A Prompt Few-Shot Examples # A.1 Translator Cooking: For the Cooking planning domain, we use a total of three few-shot examples, which are given as follows. Example 1
2308.13724#55
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
56
Question: I have 3 pots and 6 different ingredients. Each ingredient can only be picked up once. Your goal is to add ingredients to pots by following the recipes: pot1 contains ingredient1, ingredient4. pot2 contains ingredient2, ingredient3, ingredient5, ingredient6. pot3 contains ingredient1, ingredient4, ingredient5. Answer: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i -ingredient) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i -ingredient)) :effect (and (not (holding ?i -ingredient)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i -ingredient)) :effect (and (not
2308.13724#56
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
57
) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i -ingredient)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) Problem file: (define (problem threepots) (:domain cooking) (:objects pot1 pot2 pot3 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) ) (:goal (and (contain pot1 ingredient1) (contain pot1 ingredient4) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient5) (contain pot2 ingredient6)
2308.13724#57
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
59
Question: I have 4 pots and 6 different ingredients. Each ingredient can only be picked up once. Your goal is to add ingredients to pots by following the recipes: pot1 contains ingredient2, ingredient3, ingredient6. pot2 contains ingredient1, ingredient2, ingredient5. pot3 contains ingredient5, ingredient6. pot4 contains ingredient2, ingredient4, ingredient6. Answer: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i -ingredient) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i -ingredient)) :effect (and (not (holding ?i -ingredient)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i
2308.13724#59
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
60
-ingredient)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i -ingredient)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) Problem file: (define (problem fourpots) (:domain cooking) (:objects pot1 pot2 pot3 pot4 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) (pot-empty pot4) ) (:goal (and (contain pot1 ingredient2) (contain pot1 ingredient3) (contain pot1 ingredient6) (contain pot2 ingredient1) (contain pot2 ingredient2) (contain pot2 ingredient5)
2308.13724#60
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
62
Question: I have 5 pots and 6 different ingredients. Each ingredient can only be picked up once. Your goal is to add ingredients to pots by following the recipes: pot1 contains ingredient3, ingredient4, ingredient5. pot2 contains ingredient1, ingredient2, ingredient3, ingredient6. pot3 contains ingredient2, ingredient3, ingredient5. pot4 contains ingredient1, ingredient6. pot5 contains ingredient1, ingredient3, ingredient4, ingredient5. Answer: Domain file: (define (domain cooking) (:types pot ingredient) (:predicates (arm-empty) (pot-empty ?p - pot) (holding ?i - ingredient) (contain ?p - pot ?i - ingredient) ) (:action pick :parameters (?i - ingredient) :precondition (and (arm-empty)) :effect (and (holding ?i -ingredient) (not (arm-empty))) ) (:action putdown :parameters (?i - ingredient) :precondition (and (holding ?i -ingredient)) :effect (and (not (holding ?i -ingredient)) (arm-empty))
2308.13724#62
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
63
- ingredient) :precondition (and (holding ?i -ingredient)) :effect (and (not (holding ?i -ingredient)) (arm-empty)) ) (:action add :parameters (?i - ingredient ?p - pot) :precondition (and (holding ?i -ingredient)) :effect (and (not (pot-empty ?p)) (contain ?p ?i)) ) Problem file: (define (problem fivepots) (:domain cooking) (:objects pot1 pot2 pot3 pot4 pot5 - pot ingredient1 ingredient2 ingredient3 ingredient4 ingredient5 ingredient6 - ingredient) (:init (arm-empty) (pot-empty pot1) (pot-empty pot2) (pot-empty pot3) (pot-empty pot4) (pot-empty pot5) ) (:goal (and (contain pot1 ingredient3) (contain pot1 ingredient4)
2308.13724#63
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
64
(contain pot1 ingredient5) (contain pot2 ingredient2) (contain pot2 ingredient3) (contain pot2 ingredient6) (contain pot3 ingredient2) (contain pot3 ingredient3) (contain pot3 ingredient5) (contain pot4 ingredient1) (contain pot4 ingredient6) (contain pot5 ingredient1) (contain pot5 ingredient3) (contain pot5 ingredient4) (contain pot5 ingredient5)) ) Ball Moving: For the Ball Moving planning domain, we use a total of three few-shot examples, which are given as follows. Example 1
2308.13724#64
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
65
Question: I have 3 balls within 4 rooms. Initially: Robot is in room2. Ball ball1 is in room3. Ball ball2 is in room2. Ball ball3 is in room4. Your goal is to move the balls to specific rooms: ball1 in room1, ball2 in room2, and ball3 in room3. Answer: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r -robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop :parameters (?r - robot ?b -ball ?room
2308.13724#65
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
66
?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop :parameters (?r - robot ?b -ball ?room -room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) Problem file: (define (problem threeballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 - ball) (:init (arm-empty) (robot-at robot1 room2)
2308.13724#66
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
68
Question: I have 4 balls within 4 rooms. Initially: Robot is in room3. Ball ball1 is in room1. Ball ball2 is in room3. Ball ball3 is in room1. Ball ball4 is in room2. Your goal is to move the balls to specific rooms: ball1 in room3, ball2 in room2, ball3 in room4, and ball4 in room4. Answer: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r -robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop
2308.13724#68
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
69
?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop :parameters (?r - robot ?b -ball ?room -room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) Problem file: (define (problem fourballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 ball4 - ball) (:init (arm-empty) (robot-at robot1 room3) (at ball1 room1) (at ball2 room3) (at ball3 room1) (at ball4 room2) ) (:goal (and
2308.13724#69
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
71
Question: I have 5 balls within 4 rooms. Initially: Robot is in room2. Ball ball1 is in room1. Ball ball2 is in room2. Ball ball3 is in room4. Ball ball4 is in room3. Ball ball5 is in room4. Your goal is to move the balls to specific rooms: ball1 in room1, ball2 in room1, ball3 in room4, ball4 in room2, and ball5 in room1. Answer: Domain file: (define (domain ballmoving) (:types room ball robot) (:predicates (robot-at ?r - robot ?x - room) (at ?b - ball ?x - room) (arm-empty) (carry ?r -robot ?b - ball) ) (:action move :parameters (?r - robot ?from ?to - room) :precondition (and (robot-at ?r ?from)) :effect (and (robot-at ?r ?to) (not (robot-at ?r ?from))) ) (:action pick :parameters (?r - robot ?b - ball ?room - room) :precondition (and (at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty))
2308.13724#71
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
72
(at ?b ?room) (robot-at ?r ?room) (arm-empty)) :effect (and (carry ?r ?b) (not (arm-empty)) (not (at ?b ?room)) ) (:action drop :parameters (?r - robot ?b -ball ?room -room) :precondition (and (carry ?r ?b) (robot-at ?r ?room)) :effect (and (arm-empty) (at ?b ?room) (not (carry ?r ?b))) ) Problem file: (define (problem fiveballs) (:domain ballmoving) (:objects robot1 - robot room1 room2 room3 room4 - room ball1 ball2 ball3 ball4 ball5 - ball) (:init (arm-empty) (robot-at robot1 room2) (at ball1 room1) (at ball2 room2) (at ball3 room4) (at ball4 room3) (at ball5 room4) ) (:goal (and (at ball1 room1) (at ball2 room1) (at ball3 room4)
2308.13724#72
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements observed in Large Language Models (LLMs) in the field of natural language processing, recent research has commenced investigations into the application of LLMs for complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance the generalizability as task-agnostic planners and facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM is able to achieve markedly higher success rates in task accomplishments compared to state-of-the-art LLM-based planners. Moreover, it also preserves the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
74
Question: I have 3 blocks. Initially: Block b1 is on the table. Block b2 is on the table. Block b3 is on top of b1. Your goal is to move the blocks such that they are stacked in the order: b1 on b2, b2 on b3, and b3 on table.

Answer:
Domain file:
(define (domain blocksworld)
  (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x))
  (:action pickup
    :parameters (?ob)
    :precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
    :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty)))
  )
  (:action putdown
    :parameters (?ob)
    :precondition (holding ?ob)
    :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob)))
  )
  (:action stack
    :parameters (?ob ?underob)
    :precondition (and (clear ?underob) (holding ?ob))
    :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob)))
2308.13724#74
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements of Large Language Models (LLMs) in natural language processing, recent research has begun to investigate their application to complex, long-horizon sequential task planning in robotics. As task-agnostic planners, LLMs offer the potential to improve generalizability and to enable flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates in three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator converts the natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined with the aid of a validator in the iterative self-refinement step. We evaluate the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM achieves markedly higher task success rates than state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
75
?ob))
    :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob)))
  )
  (:action unstack
    :parameters (?ob ?underob)
    :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
    :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty))))
)

Problem file:
(define (problem threeblocks)
  (:domain blocksworld)
  (:objects b1 b2 b3)
  (:init (arm-empty) (on-table b1) (on-table b2) (on b3 b1) (clear b2) (clear b3)
2308.13724#75
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements of Large Language Models (LLMs) in natural language processing, recent research has begun to investigate their application to complex, long-horizon sequential task planning in robotics. As task-agnostic planners, LLMs offer the potential to improve generalizability and to enable flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates in three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator converts the natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined with the aid of a validator in the iterative self-refinement step. We evaluate the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM achieves markedly higher task success rates than state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
77
Question: I have 4 blocks. Initially: Block b1 is on the table. Block b2 is on top of b4. Block b3 is on top of b1. Block b4 is on the table. Your goal is to move the blocks such that they are stacked in the order: b3 on b2, b2 on b1, b1 on b4, and b4 on table.

Answer:
Domain file:
(define (domain blocksworld)
  (:predicates (clear ?x) (on ?x ?y) (on-table ?x) (arm-empty) (holding ?x))
  (:action pickup
    :parameters (?ob)
    :precondition (and (clear ?ob) (on-table ?ob) (arm-empty))
    :effect (and (holding ?ob) (not (clear ?ob)) (not (on-table ?ob)) (not (arm-empty)))
  )
  (:action putdown
    :parameters (?ob)
    :precondition (holding ?ob)
    :effect (and (clear ?ob) (arm-empty) (on-table ?ob) (not (holding ?ob)))
  )
  (:action stack
    :parameters (?ob ?underob)
    :precondition (and (clear ?underob) (holding ?ob))
    :effect (and (arm-empty) (clear ?ob) (on ?ob
2308.13724#77
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements of Large Language Models (LLMs) in natural language processing, recent research has begun to investigate their application to complex, long-horizon sequential task planning in robotics. As task-agnostic planners, LLMs offer the potential to improve generalizability and to enable flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates in three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator converts the natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined with the aid of a validator in the iterative self-refinement step. We evaluate the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM achieves markedly higher task success rates than state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]
2308.13724
78
?underob)
    :precondition (and (clear ?underob) (holding ?ob))
    :effect (and (arm-empty) (clear ?ob) (on ?ob ?underob) (not (clear ?underob)) (not (holding ?ob)))
  )
  (:action unstack
    :parameters (?ob ?underob)
    :precondition (and (on ?ob ?underob) (clear ?ob) (arm-empty))
    :effect (and (holding ?ob) (clear ?underob) (not (on ?ob ?underob)) (not (clear ?ob)) (not (arm-empty))))
)

Problem file:
(define (problem fourblocks)
  (:domain blocksworld)
  (:objects b1 b2 b3 b4)
  (:init (arm-empty) (on-table b1) (on b2 b4) (on b3 b1) (on-table b4)
2308.13724#78
ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning
Motivated by the substantial achievements of Large Language Models (LLMs) in natural language processing, recent research has begun to investigate their application to complex, long-horizon sequential task planning in robotics. As task-agnostic planners, LLMs offer the potential to improve generalizability and to enable flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates in three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator converts the natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined with the aid of a validator in the iterative self-refinement step. We evaluate the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM achieves markedly higher task success rates than state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions.
http://arxiv.org/pdf/2308.13724
Zhehua Zhou, Jiayang Song, Kunpeng Yao, Zhan Shu, Lei Ma
cs.RO, cs.AI
null
null
cs.RO
20230826
20230826
[ { "id": "2211.09935" }, { "id": "2110.01517" }, { "id": "2204.00598" }, { "id": "1810.04805" }, { "id": "2304.11477" }, { "id": "2210.11610" }, { "id": "2108.07258" }, { "id": "2204.01691" }, { "id": "2305.11014" }, { "id": "2207.05608" }, { "id": "2303.06247" }, { "id": "2303.17651" }, { "id": "2303.12153" }, { "id": "2206.10498" }, { "id": "1909.01066" }, { "id": "2205.12689" } ]