2308.14963#4
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Typically, "encoders" for generating the vector representations are implemented using transformers, which are usually fine-tuned in a supervised manner using a large dataset of relevant query–passage pairs (Karpukhin et al., 2020; Xiong et al., 2021). This formulation of search, in terms of comparisons between dense vectors, differs from "traditional" bag-of-words sparse representations that rely on inverted indexes for low-latency query evaluation. Instead, nearest neighbor search in vector space requires entirely different techniques: indexes based on hierarchical navigable small-world networks (HNSW) (Malkov and Yashunin, 2020) are commonly acknowledged as representing the state of the art. The Faiss library (Johnson et al., 2019) provides a popular implementation of HNSW indexes that is broadly adopted today and serves as a standard baseline. Despite conceptual similarities (Lin, 2021), it is clear that top-k retrieval on sparse vectors and dense vectors requires quite different and distinct "software stacks". Since hybrid approaches that combine both dense and sparse representations have been shown to be more effective than either alone (Ma et al., 2022b; Lin and Lin, 2023), many modern systems combine separate retrieval components to achieve hybrid retrieval. For example, the Pyserini IR toolkit (Lin et al., 2021a) integrates Lucene and Faiss for sparse and dense retrieval, respectively. Recognizing the need for managing both sparse and dense retrieval models, the dominant narrative today is that the modern enterprise "AI stack" requires a dedicated vector store or vector database, alongside existing fixtures such as relational databases, NoSQL stores, event stores, etc. A vector store would handle, for example, standard CRUD (create, read, update, delete) operations as well as nearest neighbor search. Many startups today are built on this premise; examples include Pinecone, Weaviate, Chroma, Milvus, and Qdrant, just to name a few.
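To ground this, here is a minimal sketch of HNSW indexing and top-k search with Faiss; the vectors are random placeholders rather than real embeddings, and the parameter values are illustrative (they mirror those reported later in the paper, but this is not the paper's code):

```python
# A minimal sketch of HNSW indexing and top-k search with Faiss.
# The vectors are random placeholders standing in for real embeddings.
import numpy as np
import faiss

d = 1536                                           # embedding dimensionality
docs = np.random.rand(10000, d).astype("float32")  # "document" vectors
queries = np.random.rand(5, d).astype("float32")   # "query" vectors

index = faiss.IndexHNSWFlat(d, 16, faiss.METRIC_INNER_PRODUCT)  # M = 16
index.hnsw.efConstruction = 100   # construction-time beam width
index.add(docs)

index.hnsw.efSearch = 1000        # search-time beam width
scores, ids = index.search(queries, 10)  # top-10 neighbors per query
```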
2308.14963#3
2308.14963#5
2308.14963
[ "2110.01529" ]
2308.14963#5
Vector Search with OpenAI Embeddings: Lucene Is All You Need
This is the narrative that our work challenges.

² https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-passage-openai-ada2.md

Modern enterprise architectures are already exceedingly complex, and the addition of another software component (i.e., a distinct vector store) requires carefully weighing costs as well as benefits. The cost is obvious: increased complexity, not only from the introduction of a new component, but also from interactions with existing components. What about the benefits? While vector stores no doubt introduce new capabilities, the critical question is whether these capabilities can be provided via alternative means.

Figure 1: A standard bi-encoder architecture, where encoders generate dense vector representations (embeddings) from queries and documents (passages). Retrieval is framed as k-nearest neighbor search in vector space.
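Conceptually, the bi-encoder framing in Figure 1 reduces retrieval to nearest-neighbor search over encoder outputs. A brute-force sketch (illustrative only; real systems use an approximate index such as HNSW):

```python
# Conceptual sketch of Figure 1: retrieval as k-nearest-neighbor search
# over dense vectors, using brute-force inner product for clarity.
import numpy as np

def top_k(query_vec: np.ndarray, doc_vecs: np.ndarray, k: int = 10):
    scores = doc_vecs @ query_vec      # inner-product similarity
    order = np.argsort(-scores)[:k]    # indices of the k best documents
    return order, scores[order]
```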
2308.14963#4
2308.14963#6
2308.14963
[ "2110.01529" ]
2308.14963#6
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Search is a brownfield application. Wikipedia defines this as "a term commonly used in the information technology industry to describe problem spaces needing the development and deployment of new software systems in the immediate presence of existing (legacy) software applications/systems." Additionally, "this implies that any new software architecture must take into account and coexist with live software already in situ." Specifically, many organizations have already made substantial investments in search within the Lucene ecosystem. While most organizations do not directly use the open-source Lucene search library in production, the search application landscape is dominated by platforms that are built on top of Lucene such as Elasticsearch, OpenSearch, and Solr. For example, Elastic, the publicly traded company behind Elasticsearch, reports approximately 20,000 subscriptions to its cloud service as of Q4 FY2023.³ Similarly, in the category of search engines, Lucene dominates the DB-Engines Ranking, a site that tracks the popularity of various database management systems.⁴ There's a paucity of concrete usage data, but it would not be an exaggeration to say that Lucene has an immense install base. The most recent major release of Lucene (version 9), dating back to December 2021, includes HNSW indexing and search capabilities, which have steadily improved over the past couple of years. This means that differences in capabilities between Lucene and dedicated vector stores are primarily in terms of performance, not the availability of must-have features.
2308.14963#5
2308.14963#7
2308.14963
[ "2110.01529" ]
2308.14963#7
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Thus, from a simple cost–benefit calculus, it is not clear that vector search requires introducing a dedicated vector store into an already complex enterprise "AI stack". Our thesis: Lucene is all you need. We empirically demonstrate our claims on the MS MARCO passage ranking test collection, a standard benchmark dataset used by researchers today. We have encoded the entire corpus using OpenAI's ada2 embedding endpoint, and then indexed the dense vectors with Lucene. Experimental results show that this combination achieves effectiveness comparable to the state of the art on the development queries as well as queries from the TREC 2019 and 2020 Deep Learning Tracks.
2308.14963#6
2308.14963#8
2308.14963
[ "2110.01529" ]
2308.14963#8
Vector Search with OpenAI Embeddings: Lucene Is All You Need
³ https://ir.elastic.co/news-events/press-releases/press-releases-details/2023/Elastic-Reports-Fourth-Quarter-and-Fiscal-2023-Financial-Results/default.aspx
⁴ https://db-engines.com/en/ranking/search+engine

Our experiments are conducted with Anserini (Yang et al., 2018), a Lucene-based IR toolkit that aims to support reproducible information retrieval research. By building on Lucene, Anserini aims to bridge the gap between academic information retrieval research and the practice of building real-world search applications. Devins et al. (2022) showed that capabilities implemented by researchers in Anserini using Lucene can be straightforwardly translated into Elasticsearch (or any other platform in the Lucene ecosystem), thus simplifying the path from prototypes to production deployments. Our demonstration further shows the ease with which state-of-the-art vector search can be implemented by simply "plugging together" readily available components.
2308.14963#7
2308.14963#9
2308.14963
[ "2110.01529" ]
2308.14963#9
Vector Search with OpenAI Embeddings: Lucene Is All You Need
In the context of the bi-encoder architecture, Lin (2021) identified the logical scoring model and the physical retrieval model as distinct conceptual components. In our experiments, the logical scoring model maps to the OpenAI embedding API, whose operations are no different from any other API endpoint. What Lin calls the physical retrieval model focuses on the top-k retrieval capability, which is handled by Lucene. In Anserini, vector indexing and search are exposed in a manner that is analogous to indexing and retrieval using bag-of-words models such as BM25. Thus, the implementation of the state of the art in vector search using generative AI does not require any AI-specific implementations, which increases the accessibility of these technologies to a wider audience.
2308.14963#8
2308.14963#10
2308.14963
[ "2110.01529" ]
2308.14963#10
Vector Search with OpenAI Embeddings: Lucene Is All You Need
# 3 Experiments

Experiments in this paper are relatively straightforward. We focused on the MS MARCO passage ranking test collection (Bajaj et al., 2018), which is built on a corpus comprising approximately 8.8 million passages extracted from the web. Note that since the embedding vectors are generated by OpenAI's API endpoint, no model training was performed. For evaluation, we used the standard development queries as well as queries from the TREC 2019 and TREC 2020 Deep Learning Tracks. In our experimental setup, we utilized the OpenAI ada2 model (Neelakantan et al., 2022) for generating both query and passage embeddings. This model is characterized by an input limit of 8191 tokens and an output embedding size of 1536 dimensions. However, to maintain consistency with the existing literature (Pradeep et al., 2021; Ma et al., 2022a), we truncated all passages in the corpus to 512 tokens. It is unknown whether OpenAI leveraged the MS MARCO passage corpus during model development, but in general, accounting for data leakage is extremely challenging for large models, especially those from OpenAI that lack transparency. Using tiktoken, OpenAI's official tokenizer, we computed the average token count per passage in our corpus to be 75.2, resulting in a total of approximately 660 million tokens. To generate the embeddings efficiently, we queried the API in parallel while respecting the rate limit of 3500 calls per minute. Given the high-volume nature of our API calls, we had to incorporate error-handling logic in our code. Ultimately, we were able to encode both the corpus and the queries, the latter of which are negligible in comparison, in a span of two days. As previously mentioned, all our retrieval experiments were conducted with the Anserini IR toolkit (Yang et al., 2018). The primary advantage of Anserini is that it provides direct access to underlying Lucene features in a "researcher-friendly" manner that better comports with modern evaluation workflows. Our experiments were based on Lucene 9.5.0.
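A sketch of the embedding workflow just described is shown below. It uses the OpenAI Python client as it existed at the time of writing (pre-1.0); the helper name and retry policy are illustrative assumptions, not the paper's actual code.

```python
# Sketch of the embedding workflow: truncate to 512 tokens with tiktoken,
# call the ada2 endpoint, and retry with backoff on errors. Helper name
# and retry policy are illustrative (pre-1.0 OpenAI client).
import time
import openai
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by ada2

def embed_passage(text: str, max_tokens: int = 512, retries: int = 5):
    tokens = enc.encode(text)[:max_tokens]       # truncate long passages
    for attempt in range(retries):
        try:
            resp = openai.Embedding.create(
                model="text-embedding-ada-002",
                input=enc.decode(tokens),
            )
            return resp["data"][0]["embedding"]  # 1536-dimensional vector
        except openai.error.OpenAIError:
            time.sleep(2 ** attempt)             # back off, then retry
    raise RuntimeError("embedding request failed after retries")
```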
2308.14963#9
2308.14963#11
2308.14963
[ "2110.01529" ]
2308.14963#11
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Indexing, however, was a bit tricky: the HNSW implementation in Lucene restricts vectors to 1024 dimensions, which was not sufficient for OpenAI's 1536-dimensional embeddings.⁵ Although the resolution of this issue, which is to make vector dimensions configurable on a per-codec basis, has been merged to the Lucene source trunk,⁶ this feature has not been folded into a Lucene release (yet) as of early August 2023. Thus, there is no public release of Lucene that can directly index OpenAI's ada2 embedding vectors. Fortunately, we were able to hack around this limitation in an incredibly janky way.⁷
2308.14963#10
2308.14963#12
2308.14963
[ "2110.01529" ]
2308.14963#12
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Experimental results are shown in Table 1, where we report effectiveness in terms of standard metrics: reciprocal rank at 10 (RR@10), average precision (AP), nDCG at a rank cutoff of 10 (nDCG@10), and recall at a rank cutoff of 1000 (R@1k). The effectiveness of the ada2 embeddings is shown in the last row of the table.

⁵ https://github.com/apache/lucene/issues/11507
⁶ https://github.com/apache/lucene/pull/12436
⁷ The sketch of the solution is as follows: We copy relevant source files from the Lucene source trunk directly into our source tree and patch the vector size settings directly. When we build our fatjar, the class files of our "local versions" take precedence, and hence override the vector size limitations.
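As a concrete reference for one of the reported metrics, here is a small per-query sketch of RR@10 (the function name is illustrative):

```python
# Sketch of reciprocal rank at cutoff 10 (RR@10) for a single query;
# the collection-level score averages this over all queries.
def rr_at_10(ranked_doc_ids: list, relevant_ids: set) -> float:
    for rank, doc_id in enumerate(ranked_doc_ids[:10], start=1):
        if doc_id in relevant_ids:
            return 1.0 / rank
    return 0.0
```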
2308.14963#11
2308.14963#13
2308.14963
[ "2110.01529" ]
2308.14963#13
Vector Search with OpenAI Embeddings: Lucene Is All You Need
| | dev RR@10 | dev R@1k | DL19 AP | DL19 nDCG@10 | DL19 R@1k | DL20 AP | DL20 nDCG@10 | DL20 R@1k |
|---|---|---|---|---|---|---|---|---|
| Unsupervised Sparse Representations | | | | | | | | |
| BM25 (Ma et al., 2022a)† | 0.184 | 0.853 | 0.301 | 0.506 | 0.750 | 0.286 | 0.480 | 0.786 |
| BM25+RM3 (Ma et al., 2022a)† | 0.157 | 0.861 | 0.342 | 0.522 | 0.814 | 0.301 | 0.490 | 0.824 |
| Learned Sparse Representations | | | | | | | | |
| uniCOIL (Ma et al., 2022a)† | 0.352 | 0.958 | 0.461 | 0.702 | 0.829 | 0.443 | 0.675 | 0.843 |
| SPLADE++ ED (Formal et al., 2022)† | 0.383 | 0.983 | 0.505 | 0.731 | 0.873 | 0.500 | 0.720 | 0.900 |
| Learned Dense Representations | | | | | | | | |
| TAS-B (Hofstätter et al., 2021) | 0.340 | 0.975 | - | 0.712 | 0.845 | - | 0.693 | 0.865 |
| TCT-ColBERTv2 (Lin et al., 2021b)† | 0.358 | 0.970 | 0.447 | 0.720 | 0.826 | 0.475 | 0.688 | 0.843 |
| ColBERT-v2 (Santhanam et al., 2022) | 0.397 | 0.984 | - | - | - | - | - | - |
| Aggretriever (Lin et al., 2023)† | 0.362 | 0.974 | 0.435 | 0.684 | 0.808 | 0.471 | 0.697 | 0.856 |
| OpenAI ada2 | 0.343 | 0.984 | 0.479 | 0.704 | 0.863 | 0.477 | 0.676 | 0.871 |

Table 1: Effectiveness of OpenAI ada2 embeddings on the MS MARCO development set queries (dev) and queries from the TREC 2019/2020 Deep Learning Tracks (DL19/DL20), compared to a selection of other models. † indicates results from Pyserini's two-click reproductions (Lin, 2022) available at https://castorini.github.io/pyserini/2cr/msmarco-v1-passage.html, which may differ slightly from the original papers. All other results are copied from their original papers.

Note that due to the non-deterministic nature of HNSW indexing, effectiveness figures may vary slightly from run to run. For comparison, we present results from a few select points of reference, classified according to the taxonomy proposed by Lin (2021); OpenAI's embedding models belong in the class of learned dense representations.
2308.14963#12
2308.14963#14
2308.14963
[ "2110.01529" ]
2308.14963#14
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Notable omissions in the results table include the following: the original OpenAI paper that describes the embedding model (Neelakantan et al., 2022) does not report comparable results; neither does Izacard et al. (2021) for Contriever, another popular learned dense representation model. Recently, Kamalloo et al. (2023) also evaluated OpenAI's ada2 embeddings, but they did not examine any of the test collections we do here. Looking at the results table, our main point is that we can achieve effectiveness comparable to the state of the art using a production-grade, completely off-the-shelf embedding API coupled with Lucene for indexing and retrieval. To complete our experimental results, we provide performance figures on a server with two Intel Xeon Platinum 8160 processors (33M cache, 2.10 GHz, 24 cores each) with 1 TB RAM, running Ubuntu 18.04 with ZFS. This particular processor was launched in Q3 of 2017 and is no longer commercially available; we can characterize this server as "high end", but dated.
2308.14963#13
2308.14963#15
2308.14963
[ "2110.01529" ]
2308.14963#15
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Indexing took around three hours with 16 threads, with the parameters M set to 16 and efC set to 100, without final segment optimization. Using 32-bit floats, the raw 1536-dimensional vectors should consume 54 GB on disk, but for convenience we used an inefficient JSON text-based representation. As a result, our collection of vectors takes up 109 GB as compressed text files (using gzip). For vector search, using 16 threads, we were able to achieve 9.8 queries per second (QPS), fetching 1000 hits per query with the efSearch parameter set to 1000. These results were obtained on the MS MARCO development queries, averaged over four separate trials after a warmup run.

# 4 Discussion

Our demonstration shows that it is possible today to build a vector search prototype using OpenAI embeddings directly with Lucene. Nevertheless, there are a number of issues worth discussing, which we cover below.
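As a quick sanity check on the 54 GB raw-vector figure quoted above, the size follows directly from the corpus size and dimensionality:

```python
# Back-of-the-envelope check of the 54 GB raw-vector figure quoted above.
num_passages = 8_800_000           # approximate MS MARCO passage count
dims, bytes_per_float = 1536, 4    # ada2 dimensions, 32-bit floats

raw_bytes = num_passages * dims * bytes_per_float
print(f"{raw_bytes / 1e9:.1f} GB")  # -> 54.1 GB
```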
2308.14963#14
2308.14963#16
2308.14963
[ "2110.01529" ]
2308.14963#16
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Jank. We concede that getting our demonstration to work required a few janky implementation tricks. Even though all the required features have been merged to Lucene's source trunk, no official release has been cut that incorporates all the patches (at least at the time we performed our experiments in early August 2023). Quite simply, the complete feature set necessary for production deployment is not, as they say, ready for prime time. However, to use another cliché, this is a small matter of programming (SMOP). We see no major roadblocks in the near future: the next official release of Lucene will incorporate the necessary features, and after that, all downstream consumers will begin to incorporate the capabilities that we demonstrate here. Nevertheless, Lucene has been a relative laggard in dense retrieval. Despite this, we believe that recent developments point to substantial and sustained investments in the Lucene ecosystem moving forward. For example, in its Q4 FY2023 report, Elastic announced the Elasticsearch Relevance Engine, "powered by built-in vector search and transformer models, designed specifically to bring the power of AI innovation to proprietary enterprise data." A recent blog post⁸ from Amazon Web Services explained vector database capabilities in OpenSearch, providing many details and reference architectures. These are just two examples of commitments that help bolster the case for Lucene that we have articulated here. Overall, we are optimistic about the future of the ecosystem.

Performance. Lucene still lags alternatives in terms of indexing speed, query latency and throughput, and related metrics. For example, Ma et al. (2023) recently benchmarked Lucene 9.5.0 against Faiss (Johnson et al., 2019). Experiments suggest that Lucene achieves only around half the query throughput of Faiss under comparable settings, but appears to scale better when using multiple threads. Although these results only capture a snapshot in time, it would be fair to characterize Lucene as unequivocally slower today. However, Faiss is relatively mature and hence its headroom for performance improvements is rather limited. In contrast, we see many more opportunities for gains in Lucene. Coupled with the signs of strong commitment discussed above, we believe that the performance gap between Lucene and dedicated vector stores will decrease over time.

Alternatives. We acknowledge a number of competing alternatives that deserve consideration. Note that the core argument we forward is about cost–benefit tradeoffs:
2308.14963#15
2308.14963#17
2308.14963
[ "2110.01529" ]
2308.14963#17
Vector Search with OpenAI Embeddings: Lucene Is All You Need
In our view, it is not clear that the benefits offered by a dedicated vector store outweigh the increased architectural complexity of introducing a new software component within an enterprise. From this perspective, we can identify two potentially appealing alternatives:

• Fully managed services. One simple way to reduce architectural complexity is to make it someone else's problem. Vespa⁹ is perhaps the best example of this solution, providing both dense retrieval and sparse retrieval capabilities in a fully managed environment, eliminating the need for users to explicitly worry about implementation details involving inverted indexes, HNSW indexes, etc. Vespa provides a query language that supports a combination of vector search, full-text search, as well as search over structured data. Our main question here concerns traction and adoption: since search is a brownfield application, we're not convinced that enterprises will make the (single, large) leap from an existing solution to a fully managed service.
2308.14963#16
2308.14963#18
2308.14963
[ "2110.01529" ]
2308.14963#18
Vector Search with OpenAI Embeddings: Lucene Is All You Need
• Vector search capabilities in relational databases. In the same way that vector search grows naturally out of an already deployed and mature text search platform (e.g., Elasticsearch), we can see similar arguments being made from the perspective of relational databases. Despite numerous attempts (spanning decades) at toppling its lofty perch (Stonebraker and Hellerstein, 2005; Pavlo et al., 2009), relational databases remain a permanent fixture in enterprise "data stacks". This means that by building vector search capabilities into relational databases, enterprises gain entrée into the world of dense retrieval (essentially) for free. A great example of this approach is pgvector,¹⁰ which provides open-source vector similarity search for Postgres. We find the case compelling: if your enterprise is already running Postgres, pgvector adds vector search capabilities with minimal additional complexity.
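For illustration, a minimal pgvector sketch follows; the table and column names are hypothetical, and `<=>` is pgvector's cosine-distance operator:

```python
# A minimal pgvector sketch: table and column names are hypothetical.
import psycopg2

conn = psycopg2.connect("dbname=search")
cur = conn.cursor()

cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
cur.execute("""
    CREATE TABLE IF NOT EXISTS passages (
        id bigserial PRIMARY KEY,
        body text,
        embedding vector(1536)  -- ada2 dimensionality
    );
""")

# '<=>' is pgvector's cosine-distance operator (smaller = more similar).
query_vec = [0.0] * 1536  # placeholder for a real query embedding
literal = "[" + ",".join(str(v) for v in query_vec) + "]"
cur.execute(
    "SELECT id, body FROM passages ORDER BY embedding <=> %s::vector LIMIT 10;",
    (literal,),
)
rows = cur.fetchall()
```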
2308.14963#17
2308.14963#19
2308.14963
[ "2110.01529" ]
2308.14963#19
Vector Search with OpenAI Embeddings: Lucene Is All You Need
It's basically a free lunch.

# 5 Conclusions

There is no doubt that manipulation of dense vectors forms an important component of search today. The central debate we tackle is how these capabilities should be implemented and deployed in production systems. The dominant narrative is that you need a new, distinct addition to your enterprise "AI stack": a vector store. The alternative we propose is to say: If you've built search applications already, chances are you're already invested in the Lucene ecosystem. In this case, Lucene is all you need. Of course, time will tell who's right.
2308.14963#18
2308.14963#20
2308.14963
[ "2110.01529" ]
2308.14963#20
Vector Search with OpenAI Embeddings: Lucene Is All You Need
⁸ https://aws.amazon.com/blogs/big-data/amazon-opensearch-services-vector-database-capabilities-explained/
⁹ https://vespa.ai/
¹⁰ https://github.com/pgvector/pgvector

# Acknowledgements

This research was supported in part by the Natural Sciences and Engineering Research Council (NSERC) of Canada. We'd like to thank Josh McGrath and the team at Distyl for providing support to access OpenAI APIs.

# References

Akari Asai, Sewon Min, Zexuan Zhong, and Danqi Chen. 2023. Retrieval-based Language Models and Applications. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 6: Tutorial Abstracts).
2308.14963#19
2308.14963#21
2308.14963
[ "2110.01529" ]
2308.14963#21
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Toronto, Canada, 41–46.

Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268v3 (2018).

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2020. Overview of the TREC 2020 Deep Learning Track. In Proceedings of the Twenty-Ninth Text REtrieval Conference Proceedings (TREC 2020).
2308.14963#20
2308.14963#22
2308.14963
[ "2110.01529" ]
2308.14963#22
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Gaithersburg, Maryland.

Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M. Voorhees. 2019. Overview of the TREC 2019 Deep Learning Track. In Proceedings of the Twenty-Eighth Text REtrieval Conference Proceedings (TREC 2019). Gaithersburg, Maryland.

Josh Devins, Julie Tibshirani, and Jimmy Lin. 2022. Aligning the Research and Practice of Building Search Applications: Elasticsearch and Pyserini. In Proceedings of the 15th ACM International Conference on Web Search and Data Mining (WSDM 2022). 1573–
2308.14963#21
2308.14963#23
2308.14963
[ "2110.01529" ]
2308.14963#23
Vector Search with OpenAI Embeddings: Lucene Is All You Need
1576.

Thibault Formal, Carlos Lassance, Benjamin Piwowarski, and Stéphane Clinchant. 2022. From Distillation to Hard Negative Sampling: Making Sparse Neural IR Models More Effective. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022). Madrid, Spain, 2353–2359.

Sebastian Hofstätter, Sheng-Chieh Lin, Jheng-Hong Yang, Jimmy Lin, and Allan Hanbury. 2021.
2308.14963#22
2308.14963#24
2308.14963
[ "2110.01529" ]
2308.14963#24
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Efficiently Teaching an Effective Dense Retriever with Balanced Topic Aware Sampling. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021). 113–122.

Gautier Izacard, Mathilde Caron, Lucas Hosseini, Sebastian Riedel, Piotr Bojanowski, Armand Joulin, and Edouard Grave. 2021. Towards Unsupervised Dense Information Retrieval with Contrastive Learning. arXiv:2112.09118 (2021).

Jeff Johnson, Matthijs Douze, and Hervé Jégou. 2019. Billion-scale similarity search with GPUs. IEEE Transactions on Big Data 7, 3 (2019), 535–
2308.14963#23
2308.14963#25
2308.14963
[ "2110.01529" ]
2308.14963#25
Vector Search with OpenAI Embeddings: Lucene Is All You Need
547.

Ehsan Kamalloo, Xinyu Zhang, Odunayo Ogundepo, Nandan Thakur, David Alfonso-hermelo, Mehdi Rezagholizadeh, and Jimmy Lin. 2023. Evaluating Embedding APIs for Information Retrieval. In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 5: Industry Track). Toronto, Canada, 518–526.

Vladimir Karpukhin, Barlas Oguz, Sewon Min, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen, and Wen-tau Yih. 2020. Dense Passage Retrieval for Open-Domain Question Answering. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP).
2308.14963#24
2308.14963#26
2308.14963
[ "2110.01529" ]
2308.14963#26
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Online, 6769–6781.

Jimmy Lin. 2021. A Proposed Conceptual Framework for a Representational Approach to Information Retrieval. arXiv:2110.01529 (2021).

Jimmy Lin. 2022. Building a Culture of Reproducibility in Academic Research. arXiv:2212.13534 (2022).

Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021a.
2308.14963#25
2308.14963#27
2308.14963
[ "2110.01529" ]
2308.14963#27
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations. In Proceedings of the 44th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2021). 2356–2362.

Sheng-Chieh Lin, Minghan Li, and Jimmy Lin. 2023. Aggretriever: A Simple Approach to Aggregate Textual Representations for Robust Dense Passage Retrieval. Transactions of the Association for Computational Linguistics 11 (2023), 436–452.

Sheng-Chieh Lin and Jimmy Lin. 2023. A Dense Representation Framework for Lexical and Semantic Matching. ACM Transactions on Information Systems 41 (2023), Article No. 110, Issue 4.

Sheng-Chieh Lin, Jheng-Hong Yang, and Jimmy Lin. 2021b. In-Batch Negatives for Knowledge Distillation with Tightly-Coupled Teachers for Dense Retrieval. In Proceedings of the 6th Workshop on Representation Learning for NLP (RepL4NLP-2021). 163–173.

Xueguang Ma, Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2022a. Document Expansions and Learned Sparse Lexical Representations for MS MARCO V1 and V2. In Proceedings of the 45th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2022).
2308.14963#26
2308.14963#28
2308.14963
[ "2110.01529" ]
2308.14963#28
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Madrid, Spain, 3187–3197.

Xueguang Ma, Kai Sun, Ronak Pradeep, Minghan Li, and Jimmy Lin. 2022b. Another Look at DPR: Reproduction of Training and Replication of Retrieval. In Proceedings of the 44th European Conference on Information Retrieval (ECIR 2022), Part I. Stavanger, Norway, 613–626.

Xueguang Ma, Tommaso Teofili, and Jimmy Lin. 2023. Anserini Gets Dense Retrieval: Integration of Lucene's HNSW Indexes. In Proceedings of the 32nd International Conference on Information and Knowledge Management (CIKM 2023).
2308.14963#27
2308.14963#29
2308.14963
[ "2110.01529" ]
2308.14963#29
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Birmingham, the United Kingdom.

Yu A. Malkov and D. A. Yashunin. 2020. Efficient and Robust Approximate Nearest Neighbor Search Using Hierarchical Navigable Small World Graphs. Transactions on Pattern Analysis and Machine Intelligence 42, 4 (2020), 824–836.

Grégoire Mialon, Roberto Dessì, Maria Lomeli, Christoforos Nalmpantis, Ram Pasunuru, Roberta Raileanu, Baptiste Rozière, Timo Schick, Jane Dwivedi-Yu, Asli Celikyilmaz, Edouard Grave, Yann LeCun, and Thomas Scialom. 2023. Augmented Language Models: a Survey. arXiv:2302.07842 (2023).

Arvind Neelakantan, Tao Xu, Raul Puri, Alec Radford, Jesse Michael Han, Jerry Tworek, Qiming Yuan, Nikolas Tezak, Jong Wook Kim, Chris Hallacy, Johannes Heidecke, Pranav Shyam, Boris Power, Tyna Eloundou Nekoul, Girish Sastry, Gretchen Krueger, David Schnurr, Felipe Petroski Such, Kenny Hsu, Madeleine Thompson, Tabarak Khan, Toki Sherbakov, Joanne Jang, Peter Welinder, and Lilian Weng. 2022. Text and Code Embeddings by Contrastive Pre-Training. arXiv:2201.10005 (2022).

Andrew Pavlo, Erik Paulson, Alexander Rasin, Daniel J. Abadi, David J. DeWitt, Samuel Madden, and Michael Stonebraker. 2009. A Comparison of Approaches to Large-Scale Data Analysis. In Proceedings of the 35th ACM SIGMOD International Conference on Management of Data.
2308.14963#28
2308.14963#30
2308.14963
[ "2110.01529" ]
2308.14963#30
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Providence, Rhode Island, 165–178.

Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The Expando-Mono-Duo Design Pattern for Text Ranking with Pretrained Sequence-to-Sequence Models. arXiv:2101.05667 (2021).

Keshav Santhanam, Omar Khattab, Jon Saad-Falcon, Christopher Potts, and Matei Zaharia. 2022. ColBERTv2: Effective and Efficient Retrieval via Lightweight Late Interaction. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies.
2308.14963#29
2308.14963#31
2308.14963
[ "2110.01529" ]
2308.14963#31
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Seattle, United States, 3715–3734.

Michael Stonebraker and Joseph M. Hellerstein. 2005. What Goes Around Comes Around.

Lee Xiong, Chenyan Xiong, Ye Li, Kwok-Fung Tang, Jialin Liu, Paul N. Bennett, Junaid Ahmed, and Arnold Overwijk. 2021. Approximate Nearest Neighbor Negative Contrastive Learning for Dense Text Retrieval. In Proceedings of the 9th International Conference on Learning Representations (ICLR 2021).

Peilin Yang, Hui Fang, and Jimmy Lin. 2018.
2308.14963#30
2308.14963#32
2308.14963
[ "2110.01529" ]
2308.14963#32
Vector Search with OpenAI Embeddings: Lucene Is All You Need
Anserini: Reproducible Ranking Baselines Using Lucene. Journal of Data and Information Quality 10, 4 (2018), Article 16.
2308.14963#31
2308.14963
[ "2110.01529" ]
2308.14296#0
RecMind: Large Language Model Powered Agent For Recommendation
arXiv:2308.14296v1 [cs.IR] 28 Aug 2023

# RecMind: Large Language Model Powered Agent For Recommendation

Yancheng Wang1, Ziyan Jiang2*, Zheng Chen2*, Fan Yang2*, Yingxue Zhou2*, Eunah Cho2, Xing Fan2, Xiaojiang Huang2, Yanbin Lu2, Yingzhen Yang1
1School of Computing and Augmented Intelligence, Arizona State University
2Amazon Alexa AI
{yancheng.wang, yingzhen.yang}@asu.edu
{ziyjiang, zgchen, ffanyang, zyingxue, eunahch, fanxing, xjhuang, luyanbin}@amazon.com

# Abstract

Recent advancements in instructing Large Language Models (LLMs) to utilize external tools and execute multi-step plans have significantly enhanced their ability to solve intricate tasks, ranging from mathematical problems to creative writing. Yet, there remains a notable gap in studying the capacity of LLMs in responding to personalized queries such as a recommendation request. To bridge this gap, we have designed an LLM-powered autonomous recommender agent, RecMind, which is capable of providing precise personalized recommendations through careful planning, utilizing tools for obtaining external knowledge, and leveraging individual data. We propose a novel algorithm, Self-Inspiring, to improve the planning ability of the LLM agent. At each intermediate planning step, the LLM "self-inspires" to consider all previously explored states to plan for the next step. This mechanism greatly improves the model's ability to comprehend and utilize historical planning information for recommendation. We evaluate RecMind's performance in various recommendation scenarios, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. Our experiments show that RecMind outperforms existing zero/few-shot LLM-based recommendation methods in different recommendation tasks and achieves competitive performance to a recent model, P5, which requires full pre-training for the recommendation tasks.
2308.14296#1
2308.14296
[ "2302.13971" ]
2308.14296#1
RecMind: Large Language Model Powered Agent For Recommendation
# 1 Introduction

A Recommender System (RS) plays a key role in search engines, e-commerce websites, social media, video and music streaming services, and various other Internet platforms. An RS analyzes the historical interactions between users and items to recommend items that users may interact with in the future (Koren, Bell, and Volinsky 2009b; Linden, Smith, and York 2003). The modern RS has been enhanced by Deep Neural Networks (DNNs) to more effectively learn the representations of users, items, and sequential behaviors (Hidasi et al. 2015; He et al. 2020; Sun et al. 2019). However, most existing RSs, such as DNN-based methods (e.g., CNN and LSTM) and pre-trained language models (e.g., BERT), cannot sufficiently capture textual knowledge about users and items due to limitations in model scale and data size. Besides, most existing RS methods have been designed for specific tasks and are inadequate in generalizing to unseen recommendation tasks (Fan et al. 2023). Recent advances in Large Language Models (LLMs), such as GPT-3 (Brown et al. 2020), GPT-4 (OpenAI 2023), LLaMA (Touvron et al. 2023a), LLaMA-2 (Touvron et al. 2023b), and PaLM-2 (Anil et al. 2023) have demonstrated remarkable results in a wide range of tasks, which have motivated research into leveraging LLMs for recommendation to mitigate the aforementioned challenges (Liu et al. 2023; Fan et al. 2023; Lin et al. 2023). However, existing studies primarily rely on knowledge stored within the model's weights, neglecting the potential benefits of leveraging external tools to access real-time information and domain-specific knowledge (Yang et al. 2023; Bao et al. 2023). Furthermore, the reasoning ability of LLMs for recommendation tasks is not fully utilized in current research, resulting in suboptimal predictions due to the intricate nature of recommendation-related tasks (Liu et al. 2023).
2308.14296#0
2308.14296#2
2308.14296
[ "2302.13971" ]
2308.14296#2
RecMind: Large Language Model Powered Agent For Recommendation
To better utilize the strong reasoning and tool-using abilities of LLMs, we design a recommendation agent RecMind that leverages an LLM-powered API as its intellectual core and incorporates a few key components. The first key component is Planning, which enables the agent to break complex recommendation tasks into manageable steps for efficient handling of complex situations. Each step of planning involves thought, action, and observation (see Figure 1 for examples and Section 3 for details). The agent is also equipped with Memory consisting of Personalized Memory and World Knowledge, each accessible through specific tools. The Tools enhance the agent's functionality on top of the LLM, such as retrieving relevant knowledge or assisting with the reasoning process. To further enhance the planning ability of the agent, we propose a new planning algorithm, Self-Inspiring (SI). At each intermediate planning step, the agent "self-inspires" to consider all previously explored planning paths to explore the next planning states. Unlike existing Chain-of-Thoughts (CoT) (Wei et al. 2022) and Tree-of-Thoughts (ToT) (Yao et al. 2023), which discard states (thoughts) in previously explored paths when generating a new state, SI retains all previous states from all history paths when generating a new state. SI is inspired by the intuition that all history states can
2308.14296#1
2308.14296#3
2308.14296
[ "2302.13971" ]
2308.14296#3
RecMind: Large Language Model Powered Agent For Recommendation
*These authors contributed equally.

Thought 1: I can retrieve products previously rated by customer A3UXLMTIUSPNQ4, along with their respective ratings, from the database.
Action 1: SQLTool [Find the products previously rated by customer A3UXLMTIUSPNQ4.]
Observation 1: HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack; Pink Heart Beauty Bust Styling Jell 80ml; Gelish Hard Gel - LED Cover Pink Builder 0.5oz; Tammy Taylor Nail Powder 5 Oz.
Thought 2: I can search for the product category of "
2308.14296#2
2308.14296#4
2308.14296
[ "2302.13971" ]
2308.14296#4
RecMind: Large Language Model Powered Agent For Recommendation
Sewak Al-Falah".
Action 2: Search [What is the product category of "Sewak Al-Falah"?]
Observation 2: Sewak Al-Falah is a traditional natural toothbrush.
Thought 3 (1): Among the products customer A3UXLMTIUSPNQ4 rated before, HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack is also a toothbrush. I can retrieve the rating of HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack by customer A3UXLMTIUSPNQ4.
Action 3 (1): SQLTool [What is the rating of "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" by customer A3UXLMTIUSPNQ4?]
Observation 3 (1): Customer A3UXLMTIUSPNQ4 rated "
2308.14296#3
2308.14296#5
2308.14296
[ "2302.13971" ]
2308.14296#5
RecMind: Large Language Model Powered Agent For Recommendation
HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" as 5.
Thought 3 (2): I can search for the average rating of "Sewak Al-Falah" in the database.
Action 3 (2): SQLTool [What is the average rating of product "Sewak Al-Falah"?]
Observation 3 (2): The average rating of "Sewak Al-Falah" is 4.2.
2308.14296#4
2308.14296#6
2308.14296
[ "2302.13971" ]
2308.14296#6
RecMind: Large Language Model Powered Agent For Recommendation
Thought 1: I can retrieve products previously rated by customer A3UXLMTIUSPNQ4, along with their respective ratings, from the database.
Action 1: SQLTool [Find the products previously rated by customer A3UXLMTIUSPNQ4.]
Observation 1: HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack; Pink Heart Beauty Bust Styling Jell 80ml; Gelish Hard Gel - LED Cover Pink Builder 0.5oz; Tammy Taylor Nail Powder 5 Oz.
Thought 2: I can search for the product category of "
2308.14296#5
2308.14296#7
2308.14296
[ "2302.13971" ]
2308.14296#7
RecMind: Large Language Model Powered Agent For Recommendation
Sewak Al-Falah".
Action 2: Search [What is the product category of "Sewak Al-Falah"?]
Observation 2: Sewak Al-Falah is a traditional natural toothbrush.
Thought 3 (1): Among the products customer A3UXLMTIUSPNQ4 rated before, HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack is also a toothbrush. I can retrieve the rating of HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack by customer A3UXLMTIUSPNQ4.
2308.14296#6
2308.14296#8
2308.14296
[ "2302.13971" ]
2308.14296#8
RecMind: Large Language Model Powered Agent For Recommendation
Action 3 (1): SQLTool [What is the rating of "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" by customer A3UXLMTIUSPNQ4?]
Observation 3 (1): Customer A3UXLMTIUSPNQ4 rated "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack" as 5.
Thought 3 (2): In addition to searching for the rating of a similar product, I can also search for the average rating of "Sewak Al-Falah"
2308.14296#7
2308.14296#9
2308.14296
[ "2302.13971" ]
2308.14296#9
RecMind: Large Language Model Powered Agent For Recommendation
in the database.
Action 3 (2): SQLTool [What is the average rating of product "Sewak Al-Falah"?]
Observation 3 (2): The average rating of "Sewak Al-Falah" is 4.2.
Thought 4: Now I know the answer. Since the average rating of "Sewak Al-Falah" is 4.2, I can round 4.2 to the closest integer, which is 4.
Action 4: Finish [Customer A3UXLMTIUSPNQ4 will rate the product "Sewak Al-Falah" as 4]
Observation 4: 4 [✗]
Thought 4:
2308.14296#8
2308.14296#10
2308.14296
[ "2302.13971" ]
2308.14296#10
RecMind: Large Language Model Powered Agent For Recommendation
Now I know the answer. The rating customer A3UXLMTIUSPNQ4 gives to the product "Sewak Al-Falah" can be estimated from both observations: I can take the average of 5 and 4.2, and round it to the closest integer, which is 5.
Action 4: Finish [Customer A3UXLMTIUSPNQ4 will rate the product "Sewak Al-Falah" as 5]
Observation 4: 5 [✓]

Figure 1: Comparisons of rating prediction results by RecMind-ToT (left) and RecMind-SI (right). In RecMind-ToT, after searching for the product category of the item in Step 2, the RecMind agent first generates thought 3 (1) to retrieve the rating of a similar item. After being evaluated by the voting-based evaluator, the RecMind agent prunes option 3 (1) and proposes another thought 3 (2) to retrieve the average rating of the item and then makes the prediction solely based on it. In contrast, although RecMind-SI proposed the same alternative options in step 3, it takes into account the thought, action, and observation from both options 3 (1) and 3 (2) to generate the thought for the next step.

provide useful information for the agent to generate better planning. Figure 1 provides an example of the planning via ToT and SI and shows that SI planning achieves a more accurate rating than ToT, due to the better planning of SI. To the best of our knowledge, this is the first public research work on an LLM-powered autonomous agent for recommendation. The main contributions of our work are:

• We introduce RecMind, a novel autonomous agent framework that synergizes reasoning, acting, and memory for multiple recommendation-related tasks.
2308.14296#9
2308.14296#11
2308.14296
[ "2302.13971" ]
2308.14296#11
RecMind: Large Language Model Powered Agent For Recommendation
• We propose a self-inspiring planning technique, which generates better plans than the currently popular Chain-of-Thoughts and Tree-of-Thoughts methods by integrating multiple reasoning paths.

• We evaluate the recommendation effectiveness of RecMind across five distinct recommendation scenarios (rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization). Extensive experiments and analyses on various datasets demonstrate that RecMind outperforms the state-of-the-art (SOTA) zero/few-shot LLM-based baselines and achieves competitive performance with a fully pre-trained expert recommendation model, P5 (Geng et al. 2022).

# 2 Related Work

LLM-as-Agent: There is an emerging trend where LLMs are augmented to become autonomous language agents. These agents are designed to perform tasks autonomously towards a specific goal, rather than merely responding to queries from human users. The central concept is to leverage LLMs to produce text-based outputs and actions that can then be used for making API calls and performing operations within a specific environment. LLMs, with their strong reasoning abilities, can decompose challenging and complex tasks into smaller, more manageable steps (Wei et al. 2022; Yao et al. 2023). Furthermore, by enabling LLMs to utilize tools, we can enhance their capacity to tap into a much broader and dynamic knowledge space (Patil et al. 2023). A number of successful applications have emerged, including ReAct (Yao et al. 2022), Toolformer (Schick et al. 2023), HuggingGPT (Shen et al. 2023), generative agents (Park et al. 2023), WebGPT (Nakano et al. 2021), AutoGPT (Gravitas 2023), BabyAGI (Nakajima 2023), and LangChain (Chase 2023).

LLM for Recommendation: Recently, LLMs have gained popularity in recommender systems, given their ability to understand and summarize a user's preferences or past interactions in natural language (Fan et al. 2023; Lin et al. 2023).
2308.14296#10
2308.14296#12
2308.14296
[ "2302.13971" ]
2308.14296#12
RecMind: Large Language Model Powered Agent For Recommendation
Current LLM-based recommender systems are primarily designed for rating prediction (Kang et al. 2023; Bao et al. 2023) and sequential recommendation tasks (Wang and Lim 2023; Yang et al. 2023; Hou et al. 2023). In both tasks, a user's previous interactions with items, along with other optional data like the user profile or item attributes, are concatenated to formulate a natural language prompt. This is then fed into an LLM with options for no fine-tuning (Wang
2308.14296#11
2308.14296#13
2308.14296
[ "2302.13971" ]
2308.14296#13
RecMind: Large Language Model Powered Agent For Recommendation
Rating Prediction: How will user_X rate the item "Kusco-Murphy Tart Hair"? The rating should be an integer between 1 to 5, with 1 being lowest and 5 being highest.

Direct Recommendation: From the item candidates listed below, choose the top 10 items to recommend to user_X and rank them in order of priority from highest to lowest. Candidates: ["Rogaine Women Hair Regrowth Treatment", ......]

Sequential Recommendation: user_X has interacted with the following items in chronological order: ["Old Spice Body Wash Red Zone", ......] Please recommend the next item that the user might interact with. Choose the top 10 products to recommend in order of priority, from highest to lowest.

Review Summarization: Write a review title to summarize the review from user_X to item "Chrome Razor and Shaving Brush Stand". The review is "The stand is more solid then I expected for the price. The shape of this stand allows me to hang the shaving brush over the soap bowl, I couldn't do that with stand I had gotten with the kit."

Explanation Generation: Help user_X to generate a 5-star explanation for item "FoliGrowth Hair Growth Supplement".
2308.14296#12
2308.14296#14
2308.14296
[ "2302.13971" ]
2308.14296#14
RecMind: Large Language Model Powered Agent For Recommendation
[Figure 2 diagram: the RecMind core, backed by Expert Models and tools (SQL Tool, Search Tool), produces example outputs for the tasks above, such as: "Propidren by HairGenics"; ["Nutrafol Women's Balance Hair Growth Supplements, Ages 45 and Up", ......]; ["Old Spice Hair Styling Pomade for Men", "Lume Whole Body Deodorant - Invisible Cream Stick - 72 Hour Odor Control", ......]; "Great quality for good price."; "This product is essential for growing and maintaining healthy hair! This is a product to be bought in bulk because you can never have enough of it."]
2308.14296#13
2308.14296#15
2308.14296
[ "2302.13971" ]
2308.14296#15
RecMind: Large Language Model Powered Agent For Recommendation
Figure 2: Here is an overview of our proposed RecMind architecture. It comprises four major components: "RecMind" is built based on the ChatGPT API, "Tools" support various API calls to retrieve knowledge from the "Memory" component, and the "Planning" component is in charge of thought generation.

and Lim 2023), or full-model fine-tuning (Yang et al. 2023), or parameter-efficient fine-tuning (Bao et al. 2023). In the sequential recommendation task, to reduce the search space and better tailor it to each dataset, an optional pre-filtered set of item candidates is included in the input prompts. This ensures the model generates the final ranked list based on that specific set. Liu et al. (2023) design a series of prompts to evaluate ChatGPT's performance over five recommendation tasks. This study highlights the notable generalization capabilities of LLMs, largely attributed to their strong in-context learning abilities (Wei et al. 2021). Unlike existing studies, our study pioneers the creation of a recommendation-focused LLM agent that harnesses the LLM's capabilities in reasoning, tool usage, and action. This approach enhances the effectiveness of recommender systems, while also making them more generalizable across multiple recommendation-related tasks.

# 3 Architecture

As shown in Figure 2, the proposed RecMind consists of key components: an LLM-powered API such as ChatGPT to drive the overall reasoning, planning which breaks down a task into smaller sub-tasks for step-by-step planning, memory which provides the agent with the capability to retain and recall information over extended periods, and tools for obtaining relevant extra information from memory that is missing from the model weights and aiding the reasoning. We introduce the key components planning, memory, and tools for RecMind in the subsequent parts.

Planning: Planning helps LLM agents decompose tasks into smaller, manageable subgoals for efficiently handling complex tasks. Consider the setting where the goal is to generate the final result y given problem x via an LLM agent parameterized by θ. The traditional input-output method gives the result by y ∼ p_θ(y|x). With planning, RecMind generates the result y ∼ p_θ(y|planning(x)), where planning(x) is a set of prompts that decomposes problem x into a series of sub-tasks, each composed of thought h, action a, and observation o. Figure 1 provides examples of planning including thoughts, actions, and observations. We first review existing popular reasoning methods such as Chain-of-Thoughts and Tree-of-Thoughts, which we have explored for RecMind. Then we present the proposed Self-Inspiring reasoning algorithm. All these planning methods can be viewed as traversing through a latent reasoning tree, as shown in Figure 3.
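To fix notation, a planning state is the tuple s_t = (h_t, a_t, o_t). A minimal sketch of this data shape follows; the class and field names are illustrative, not taken from the paper's code:

```python
# A minimal rendering of a planning state s_t = (h_t, a_t, o_t) and a
# single-path plan S = {s_1, ..., s_T}, as in Chain-of-Thoughts.
from dataclasses import dataclass

@dataclass
class State:
    thought: str      # h_t: the agent's reasoning at this step
    action: str       # a_t: e.g., a tool call such as SQLTool[...]
    observation: str  # o_t: the result returned by the action

path: list[State] = [
    State("I can look up the user's past ratings.",
          "SQLTool[Find products rated by user_X.]",
          "HERBOGANIC Sewak Natural Miswak Traditional Toothbrush 12 Pack; ..."),
]
```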
2308.14296#14
2308.14296#16
2308.14296
[ "2302.13971" ]
2308.14296#16
RecMind: Large Language Model Powered Agent For Recommendation
3 Architecture As shown in Figure 2, the proposed RecMind consists of key components: LLM-powered API such as ChatGPT to drive the overall reasoning, planning which breaks down a task to smaller sub-tasks for step-by-step planning, memory which provides the agent with the capability to retain and re- call information over extended periods, and tools for obtain- ing relevant extra information from memory that is missing from the model weights and aiding the reasoning. We intro- duce the key components planning, memory and tools for RecMind in the subsequent parts.
2308.14296#15
2308.14296#17
2308.14296
[ "2302.13971" ]
2308.14296#17
RecMind: Large Language Model Powered Agent For Recommendation
â ¢ Chain-of-Thoughts (CoT) (Wei et al. 2022) has been used in ReAct (Yao et al. 2022) to synergize reasoning and action. This CoT planning method follows a single path in the reasoning tree. In our setting, at each time step t, the agent receives observation ot followed by thought ht and action at. Let st = (ht, at, ot) denote the Rec- Mind state at step t. The CoT planning method gener- ates the next state st+1 = (ht+1, at+1, ot+1) by sam- pling pθ(st+1|x, s1, .., st). Thus CoT only follows a sin- gle planning path S = {s1, ..., st, ..., sT } until reach- ing the final result y â ¼ pθ(y|x, s1, ..., st, ..., sT ) after T steps. Step | Action Observation a7 qd of gf 8 2) i) (a) Tree-of-Thoughts (DFS) (b) Self-Inspiring Figure 3: Comparison between Tree-of-Thoughts DFS and Self-Inspiring. Red arrows in the figure indicate the process for generating alternative thoughts at intermediate steps. Blue dashed arrows in the figure denote the backtracking process. Tree-of-Thoughts (ToT) (Yao et al. 2023) extends CoT to explore multiple paths in the reasoning tree. At step t and state st, ToT-BFS explicitly generates mul- tiple candidates {s1 t+1} for next state by i.i.d. sampling si t+1 â ¼ pθ(st+1|x, s1, .., st) for i â [k]. Then it applies majority vote to select the state st+1 from {s1 t+1}. Eventually ToT-BFS generates a single path similar to CoT. In contrast, ToT-DFS ex- plores one branch at a time, but might prune the cur- rent state, and backtracks to the previous state to start a new reasoning branch. Denote the first explored path as z(1) = {s(1) t+1}.
2308.14296#16
2308.14296#18
2308.14296
[ "2302.13971" ]
2308.14296#18
RecMind: Large Language Model Powered Agent For Recommendation
If the last state s^(1)_{t+1} is pruned, ToT-DFS backtracks to the previous state s_t and starts a new reasoning branch, so the path becomes z^(2) = {s_1, ..., s_t, s^(2)_{t+1}, ...}. After exploring n branches, we denote the final path of ToT as z^(n) = {s_1, ..., s^(n)_T}, and the final result y is obtained by y ∼ p_θ(y|x, z^(n)). We find that the discarded historical states from previously explored branches, such as s^(1)_{t+1} from branch z^(1), usually contain helpful information for RecMind to generate a better state compared with only considering the final path of ToT. Thus, we propose Self-Inspiring (SI), as shown in Figure 3(b) and Algorithm 1, a new planning method for RecMind. SI inspires itself into exploring an alternative reasoning branch while retaining all previous states. At the m-th path and step t, SI generates the next step of planning by considering all previous paths, i.e., s^(m)_{t+1} ∼ p_θ(s_{t+1}|z^(1), ..., z^(m)). After exploring n paths, RecMind obtains the final result y ∼ p_θ(y|x, z^(1), ..., z^(n)).
2308.14296#17
2308.14296#19
2308.14296
[ "2302.13971" ]
2308.14296#19
RecMind: Large Language Model Powered Agent For Recommendation
Figure 3 provides an example to illustrate the key difference between ToT and SI. In ToT (Figure 3(a)), the new state N(2) at the second path is generated by only considering state N−1. The state N(1) is discarded. However, in SI (Figure 3(b)), the new state N(2) is generated based on both N−1 and N(1).

Memory: Information stored in memory, including Person-

# Algorithm 1: Self-Inspiring Planning

Input: the current planning path S = {z^(1), ..., z^(m−1), s^(m)_t} at step t, LLM p_θ, and step limit T. Let inspire(·) be the API checking if the planning should explore an alternative reasoning branch.
1:  while t ≤ T do
2:    Sample s^(m)_{t+1} = (h^(m)_{t+1}, a^(m)_{t+1}, o^(m)_{t+1}) ∼ p_θ(·|x, S)
3:    if h^(m)_{t+1}, a^(m)_{t+1} is "End of Planning" then
4:      break
5:    end if
6:    S′ ← S ∪ {s^(m)_{t+1}}
7:    if inspire({x, S′}) then
8:      Sample s^(m+1)_{t+2} ∼ p_θ(·|x, S′)
9:      S ← S′ ∪ {s^(m+1)_{t+2}}, m ← m + 1, t ← t + 2
10:   else
11:     S ← S′, t ← t + 1
12:   end if
13: end while
14: return final response y ∼ p_θ(·|x, S)
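To make Algorithm 1 concrete, here is a minimal Python sketch. The callables `sample_step`, `is_end`, `inspire`, and `answer` are hypothetical stand-ins for the LLM sampling call, the end-of-planning check, the inspire(·) API, and the final-answer generation:

```python
# A minimal sketch of Algorithm 1 (Self-Inspiring). The callables passed in
# are hypothetical stand-ins for the LLM-backed operations in the text.
def self_inspiring(x, sample_step, is_end, inspire, answer, T=10):
    S = []        # retained states across ALL explored paths
    t, m = 0, 1   # current step and current path index
    while t <= T:
        state = sample_step(x, S)        # s^(m)_{t+1} ~ p_theta(. | x, S)
        if is_end(state):
            break
        S.append(state)                  # S' <- S u {s^(m)_{t+1}}
        if inspire(x, S):                # branch: explore an alternative,
            S.append(sample_step(x, S))  # conditioning on ALL history states
            m, t = m + 1, t + 2
        else:
            t += 1
    return answer(x, S)                  # y ~ p_theta(. | x, S)
```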
2308.14296#18
2308.14296#20
2308.14296
[ "2302.13971" ]
2308.14296#20
RecMind: Large Language Model Powered Agent For Recommendation
alized Memory and World Knowledge, enables the model to access knowledge beyond what is inherently present in the LLM's parameters. Using the Amazon Reviews dataset as an illustrative example, Personalized Memory includes individualized user information, such as their reviews or ratings for a particular item. World Knowledge consists of two components: the first component is item metadata information, which also falls under the domain-specific knowledge category; the second component involves real-time information that can be accessed through a Web search tool. In Figure 1, information about the product "
2308.14296#19
2308.14296#21
2308.14296
[ "2302.13971" ]
2308.14296#21
RecMind: Large Language Model Powered Agent For Recommendation
Sewak Al-Falah", retrieved from world knowledge using a Web search tool, aids the reasoning path and ultimately influences the final prediction.

Tool Use: By empowering LLMs to utilize tools, we can access vastly larger and more dynamic knowledge bases, allowing us to tackle complex computational tasks. In the RecMind system, we've incorporated three such tools:

• Database Tool: This tool translates natural language questions into SQL queries. Using this tool, the system can access domain-specific knowledge from memory that is
2308.14296#20
2308.14296#22
2308.14296
[ "2302.13971" ]
2308.14296#22
RecMind: Large Language Model Powered Agent For Recommendation
essential for the final prediction. For instance, in the Amazon Reviews dataset, it encompasses personal information such as a user's reviews or ratings for an item, as well as item metadata like the item's description, brand, and price. When the database tool is called, the agent will prompt a question, such as "What is the average rating of product Sewak Al-Falah?", based on the database schema. Next, an LLM is called to translate the question into an executable SQL query. After the SQL query is executed, its output is converted into a natural-language answer by an LLM and returned to the agent. A sketch of this flow appears below.
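```python
# A sketch of the Database Tool's text-to-SQL flow described above. The
# prompt wording and the `llm` callable are illustrative assumptions,
# not taken from the paper's implementation.
def database_tool(question: str, llm, cursor, schema: str) -> str:
    # 1. Ask the LLM to turn the natural-language question into SQL.
    sql = llm(f"Database schema:\n{schema}\n\n"
              f"Write a single SQL query that answers: {question}")
    # 2. Execute the query against the in-domain MySQL database.
    cursor.execute(sql)
    rows = cursor.fetchall()
    # 3. Ask the LLM to verbalize the raw result for the agent.
    return llm(f"Question: {question}\nSQL result: {rows}\n"
               f"Answer in one sentence:")
```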
2308.14296#21
2308.14296#23
2308.14296
[ "2302.13971" ]
2308.14296#23
RecMind: Large Language Model Powered Agent For Recommendation
• Search Tool: This tool employs a search engine (e.g., Google) to access real-time information. For instance, in the Amazon Reviews dataset, this tool could assist us in obtaining the most recent information about each item. When the Search Tool is called, the agent will prompt a question asking for external meta information, which is usually not available in the database, such as "What is the product category of Sewak Al-Falah?". Next, a search engine API will be called to search for the information and return it to the agent.

• Text Summarization Tool: This tool helps summarize lengthy texts by invoking a text summarization model from the Hugging Face Hub. For example, within the Amazon Reviews dataset, this tool can produce a summarized description of an item by considering multiple reviews of that specific item from various users. It can generate a summary such as "
# 4 Experiments

In this section, we evaluate the performance of our proposed method in various recommendation-related scenarios: rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization. First, we provide an overview of the datasets and evaluation metrics used in the different recommendation tasks. Subsequently, we delineate the experimental settings specific to each recommendation scenario.

# 4.1 Experimental Settings

Datasets and Evaluation Metrics. Following P5 (Geng et al. 2022), we conduct experiments for rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization on the Amazon Reviews (Ni, Li, and McAuley 2019) dataset. We evaluate our method and the baselines on data from the Sports & Outdoors, Beauty, and Toys & Games domains of Amazon Reviews. For a more comprehensive evaluation, we also evaluate RecMind on the Yelp (Geng et al. 2022) dataset.

To quantitatively evaluate the proposed RecMind across recommendation tasks, we employ different metrics. For rating prediction, we report Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). For sequential and direct recommendation, we use top-k Hit Ratio (HR@k) and top-k Normalized Discounted Cumulative Gain (NDCG@k), specifically reporting results for HR@5,10 and NDCG@5,10. For the assessment of explanation generation, review summarization, and conversational recommendation, we use n-gram Bilingual Evaluation Understudy (BLEU-n) and n-gram Recall-Oriented Understudy for Gisting Evaluation (ROUGE-n).

Implementation Details. We use gpt-3.5-turbo-16k (Schulman et al. 2022) as the core large language model in RecMind. To give RecMind access to in-domain knowledge, we store all the review data in a MySQL database consisting of a table with the product meta information and a table with the interaction history of all the users.
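For concreteness, here are small reference implementations of the four rating and ranking metrics above, written from their standard definitions (HR@k and NDCG@k assume a single held-out relevant item per test case, as in this evaluation setup):

```python
import math
from typing import List, Sequence

def rmse(preds: Sequence[float], targets: Sequence[float]) -> float:
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(preds, targets)) / len(preds))

def mae(preds: Sequence[float], targets: Sequence[float]) -> float:
    return sum(abs(p - t) for p, t in zip(preds, targets)) / len(preds)

def hr_at_k(ranked: List[str], target: str, k: int) -> float:
    # Hit Ratio: did the held-out item appear in the top-k list?
    return 1.0 if target in ranked[:k] else 0.0

def ndcg_at_k(ranked: List[str], target: str, k: int) -> float:
    # With one relevant item, NDCG@k reduces to 1/log2(rank + 1) when it appears.
    for rank, item in enumerate(ranked[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0
```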
Table 1: Performance comparison in rating prediction on Amazon Reviews (Beauty) and Yelp.

| Methods | Beauty RMSE | Beauty MAE | Yelp RMSE | Yelp MAE |
| --- | --- | --- | --- | --- |
| MF | 1.1973 | 0.9461 | 1.2645 | 1.0426 |
| MLP | 1.3078 | 0.9597 | 1.2951 | 1.0340 |
| P5 (pre-trained expert, few-shot) | 1.2982 | 0.8474 | 1.4685 | 1.0054 |
| ChatGPT (zero-shot) | 1.4173 | 1.1897 | 1.6725 | 1.2359 |
| ChatGPT (few-shot) | 1.1589 | 0.7327 | 1.4725 | 1.0016 |
| RecMind-CoT (zero-shot) | 1.2250 | 0.8612 | 1.5302 | 1.1673 |
| RecMind-CoT (few-shot) | 1.1326 | 0.7167 | 1.3925 | 0.9794 |
| RecMind-ToT (BFS, few-shot) | 1.1197 | 0.7059 | 1.3875 | 0.9766 |
| RecMind-ToT (DFS, few-shot) | 1.1205 | 0.7103 | 1.3826 | 0.9774 |
| RecMind-SI (zero-shot) | 1.1894 | 0.7883 | 1.4530 | 1.0009 |
| RecMind-SI (few-shot) | 1.0756 | 0.6892 | 1.3674 | 0.9698 |

# 4.2 Compared Methods

We compare the performance of our method against the following baselines, covering both LLM fine-tuning methods, such as P5 (Geng et al. 2022), and ChatGPT-based LLM prompting methods (Liu et al. 2023). In addition, we implement RecMind with three different planning methods, namely Chain-of-Thoughts (CoT), Tree-of-Thoughts (ToT) (Yao et al. 2023), and the proposed Self-Inspiring (SI). In summary, the compared methods include:

• P5 (Geng et al. 2022) unifies different recommendation tasks in a shared generative large language model.
A collection of personalized prompts has been created for various recommendation-related tasks. All raw data, including user-item interactions, user descriptions, item metadata, and user reviews, are transformed into natural language sequences, and the large language model is then fine-tuned on these sequences.

• ChatGPT (Liu et al. 2023) is a powerful large language model developed by OpenAI. Liu et al. (2023) construct a benchmark to evaluate ChatGPT's performance on different recommendation tasks by designing specific prompts in both zero-shot and few-shot settings. In the zero-shot setting, the LLM is directly prompted for the final prediction; in the few-shot setting, several in-context examples are provided. We name the ChatGPT baselines in these two settings ChatGPT (zero-shot) and ChatGPT (few-shot).
• RecMind-CoT, where the planning is based on ReAct-CoT (Yao et al. 2022). ReAct is a prompt-based paradigm for general task solving that extends Chain-of-Thoughts (CoT) (Wei et al. 2022) to synergize reasoning and acting with external tools. In our experiments, we adopt the same tools for this method as for the ReAct baseline. We explore both zero-shot and few-shot settings and name them RecMind-CoT (zero-shot) and RecMind-CoT (few-shot).
• RecMind-ToT, where the planning is based on Tree-of-Thoughts (ToT) (Yao et al. 2023).

Table 2: Performance comparison in direct recommendation on Amazon Reviews (Beauty) and Yelp.

| Methods | Beauty HR@5 | Beauty NDCG@5 | Beauty HR@10 | Beauty NDCG@10 | Yelp HR@5 | Yelp NDCG@5 | Yelp HR@10 | Yelp NDCG@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| BPR-MLP | 0.1392 | 0.0848 | 0.2542 | 0.1215 | 0.1876 | 0.1184 | 0.3066 | 0.1566 |
| P5 (pre-trained expert, few-shot) | 0.1478 | 0.1003 | 0.2159 | 0.1289 | 0.2105 | 0.1360 | 0.3182 | 0.1746 |
| ChatGPT (zero-shot) | 0.0146 | 0.0107 | 0.0705 | 0.0235 | 0.0479 | 0.0265 | 0.0751 | 0.0326 |
| ChatGPT (few-shot) | 0.0228 | 0.0157 | 0.0903 | 0.0362 | 0.0512 | 0.0300 | 0.0879 | 0.0412 |
| RecMind-CoT (zero-shot) | 0.0497 | 0.0325 | 0.1129 | 0.0637 | 0.0992 | 0.0719 | 0.1673 | 0.1170 |
| RecMind-CoT (few-shot) | 0.0682 | 0.0387 | 0.1345 | 0.0814 | 0.1262 | 0.0897 | 0.1840 | 0.1359 |
| RecMind-ToT (BFS, few-shot) | 0.0734 | 0.0402 | 0.1355 | 0.0808 | 0.1649 | 0.0920 | 0.2217 | 0.1503 |
| RecMind-ToT (DFS, few-shot) | 0.0705 | 0.0407 | 0.1302 | 0.0812 | 0.1601 | 0.0904 | 0.2079 | 0.1453 |
| RecMind-SI (zero-shot) | 0.0675 | 0.0524 | 0.1259 | 0.0923 | 0.1055 | 0.0791 | 0.1674 | 0.1293 |
| RecMind-SI (few-shot) | 0.0915 | 0.0624 | 0.1559 | 0.1063 | 0.1749 | 0.0935 | 0.2451 | 0.1607 |
Table 3: Performance comparison in sequential recommendation on Amazon Reviews (Beauty) and Yelp.

| Methods | Beauty HR@5 | Beauty NDCG@5 | Beauty HR@10 | Beauty NDCG@10 | Yelp HR@5 | Yelp NDCG@5 | Yelp HR@10 | Yelp NDCG@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| S3-Rec | 0.0387 | 0.0244 | 0.0647 | 0.0327 | 0.0201 | 0.0123 | 0.0341 | 0.0168 |
| P5 (pre-trained expert, few-shot) | 0.0459 | 0.0347 | 0.0603 | 0.0411 | 0.0565 | 0.0389 | 0.0702 | 0.0441 |
| ChatGPT (zero-shot) | 0.0089 | 0.0053 | 0.0103 | 0.0060 | 0.0102 | 0.0062 | 0.0143 | 0.0089 |
| ChatGPT (few-shot) | 0.0179 | 0.0124 | 0.0256 | 0.0125 | 0.0217 | 0.0116 | 0.0320 | 0.0165 |
| RecMind-CoT (zero-shot) | 0.0182 | 0.0139 | 0.0297 | 0.0160 | 0.0368 | 0.0239 | 0.0554 | 0.0316 |
| RecMind-CoT (few-shot) | 0.0349 | 0.0187 | 0.0486 | 0.0302 | 0.0427 | 0.0305 | 0.0590 | 0.0380 |
| RecMind-ToT (BFS, few-shot) | 0.0387 | 0.0235 | 0.0522 | 0.0327 | 0.0447 | 0.0319 | 0.0624 | 0.0337 |
| RecMind-ToT (DFS, few-shot) | 0.0365 | 0.0211 | 0.0497 | 0.0355 | 0.0455 | 0.0328 | 0.0622 | 0.0349 |
| RecMind-SI (zero-shot) | 0.0339 | 0.0200 | 0.0469 | 0.0310 | 0.0396 | 0.0281 | 0.0569 | 0.0340 |
| RecMind-SI (few-shot) | 0.0415 | 0.0289 | 0.0574 | 0.0375 | 0.0471 | 0.0342 | 0.0635 | 0.0407 |
ToT enables the exploration of coherent units of thought that serve as intermediate steps toward problem-solving. We implement RecMind-ToT with two search strategies for choosing among candidates at intermediate steps: breadth-first search, named RecMind-ToT (BFS, few-shot), and depth-first search, named RecMind-ToT (DFS, few-shot); a schematic sketch of the two strategies is given at the end of this subsection. In addition to the above methods, we consider further task-specific baselines, which are introduced in the corresponding subsections.

# 4.3 Experimental Results on Precision-oriented Recommendation Tasks

We first evaluate the proposed RecMind and the baselines on three precision-oriented recommendation tasks: rating prediction, sequential recommendation, and direct recommendation.

Rating Prediction. Rating prediction is an essential task in recommendation systems that aims to predict the rating a user would give to a particular item. For rating prediction, we further include the baselines MF (Koren, Bell, and Volinsky 2009a) and MLP (Cheng et al. 2016), trained with a mean squared error loss. The results for rating prediction on Amazon Reviews (Beauty domain) and Yelp are shown in Table 1. RecMind with its different planning mechanisms usually outperforms the fully trained models on rating prediction. This improvement mainly stems from the fact that RecMind has access both to the rating history the user gave to different items and to the rating history the item received from different users in the database. On the other hand, fully trained models such as MLP and P5 usually have much higher RMSE, which can be attributed to over-fitting on the training data.
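The sketch below contrasts the two ToT search strategies schematically. The `expand` and `score` functions are hypothetical LLM-backed callables (proposing candidate next thoughts and rating partial paths), and the beam width, depth, and pruning threshold are illustrative parameters, not the paper's settings.

```python
from typing import Callable, List, Optional

def tot_bfs(x: str, expand: Callable, score: Callable, beam: int = 3, depth: int = 4) -> List[str]:
    # Breadth-first: keep the `beam` best partial thought paths at every level.
    frontier: List[List[str]] = [[]]
    for _ in range(depth):
        candidates = [p + [t] for p in frontier for t in expand(x, p)]
        frontier = sorted(candidates, key=lambda p: score(x, p), reverse=True)[:beam]
    return frontier[0]

def tot_dfs(x: str, expand: Callable, score: Callable,
            threshold: float = 0.5, depth: int = 4,
            path: Optional[List[str]] = None) -> Optional[List[str]]:
    # Depth-first: follow the most promising thought, backtracking on dead ends.
    path = path or []
    if len(path) == depth:
        return path
    for t in sorted(expand(x, path), key=lambda t: score(x, path + [t]), reverse=True):
        if score(x, path + [t]) >= threshold:      # prune unpromising branches
            result = tot_dfs(x, expand, score, threshold, depth, path + [t])
            if result is not None:
                return result
    return None
```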
Direct Recommendation. In the direct recommendation scenario, RecMind predicts the recommended items from a candidate set of 100 items from the same dataset, of which only one candidate is positive. Figure 2 shows an example of direct recommendation in the Beauty domain of Amazon Reviews. For a specific user {userID} with a list of products, the agent is prompted: "From the item candidates listed, choose the top 10 items to recommend to the user {userID} and rank them in order of priority from highest to lowest. Candidates: ['Item List']".
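Instantiating this prompt template in code is straightforward; the user ID and candidate names below are placeholders:

```python
from typing import List

def direct_rec_prompt(user_id: str, candidates: List[str]) -> str:
    # Template text follows the prompt quoted above.
    return (
        f"From the item candidates listed, choose the top 10 items to recommend "
        f"to the user {user_id} and rank them in order of priority from highest "
        f"to lowest. Candidates: {candidates}"
    )

print(direct_rec_prompt("A1B2C3", ["Sewak Al-Falah", "Placeholder Item B", "Placeholder Item C"]))
```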
In this task, we include the additional baseline BPR-MLP (Cheng et al. 2016). Before evaluating each test case, we remove the interaction history between the positive item and the user to avoid data leakage. The results for direct recommendation are shown in Table 2. They show that fully trained models such as P5 usually perform better than RecMind. The main reason for the performance gap is the long context formed by the names of the 100 candidate items: the LLM agent tends to first retrieve information related to items positioned at the front of the candidate list. Such positional bias has also been observed in previous work (Liu et al. 2023). Table 2 shows that diverse reasoning planning, such as Tree-of-Thoughts and our proposed Self-Inspiring, can alleviate this issue by gradually filtering out less likely items. However, it remains hard for LLMs to fully explore all candidates, especially under prompt context-length limitations.

Sequential Recommendation. For sequential recommendation, the agent takes the names of the user's historically interacted items, in order, as input. The agent is then prompted to predict the title of the next item the user might interact with. Figure 2 shows an example of sequential recommendation in the Beauty domain of Amazon Reviews.

Table 4: Performance comparison on explanation generation on Amazon Reviews (Beauty) and Yelp.
| Methods | Beauty BLEU2 | Beauty ROUGE1 | Beauty ROUGE2 | Beauty ROUGEL | Yelp BLEU2 | Yelp ROUGE1 | Yelp ROUGE2 | Yelp ROUGEL |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 0.9783 | 17.0412 | 1.8962 | 12.1709 | 1.2784 | 18.1924 | 2.9517 | 13.2315 |
| ChatGPT (zero-shot) | 0.0359 | 9.7892 | 0.7994 | 5.1215 | 0.0419 | 8.9776 | 0.8549 | 6.1715 |
| ChatGPT (few-shot) | 1.1766 | 11.8905 | 2.5894 | 5.8920 | 1.1766 | 12.0901 | 3.2170 | 6.7823 |
| RecMind-CoT (zero-shot) | 0.8985 | 11.0597 | 1.9675 | 7.7471 | 1.1052 | 12.5719 | 2.1941 | 7.7471 |
| RecMind-CoT (few-shot) | 1.3096 | 12.7987 | 2.7015 | 8.0164 | 1.2759 | 13.9690 | 3.0173 | 9.1081 |
| RecMind-ToT (BFS, few-shot) | 1.3054 | 12.8249 | 2.7050 | 8.0596 | 1.2960 | 14.1728 | 3.4539 | 9.6125 |
| RecMind-ToT (DFS, few-shot) | 1.3159 | 12.8975 | 2.7125 | 8.1150 | 1.2896 | 14.2201 | 3.6710 | 9.6719 |
| RecMind-SI (zero-shot) | 1.1589 | 11.6794 | 2.2460 | 7.8974 | 1.1589 | 11.6794 | 2.2460 | 7.8974 |
| RecMind-SI (few-shot) | 1.3459 | 13.2560 | 2.7479 | 8.9614 | 1.3094 | 14.4220 | 3.8974 | 9.7125 |

Table 5: Performance comparison on review summarization on Amazon Reviews (Beauty).
| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 2.0357 | 8.3079 | 1.5892 | 7.4820 |
| ChatGPT (zero-shot) | 0.6532 | 3.8579 | 0.3059 | 3.3552 |
| ChatGPT (few-shot) | 0.9137 | 4.0179 | 0.4179 | 3.6790 |
| RecMind-CoT (zero-shot) | 1.3596 | 5.0279 | 0.7156 | 4.7689 |
| RecMind-CoT (few-shot) | 1.3786 | 5.5397 | 0.8456 | 4.8024 |
| RecMind-ToT (BFS, few-shot) | 1.3737 | 5.4187 | 0.8254 | 4.8157 |
| RecMind-ToT (DFS, few-shot) | 1.3798 | 5.5794 | 0.8351 | 4.8976 |
| RecMind-SI (zero-shot) | 1.3688 | 5.4579 | 0.8974 | 4.9746 |
| RecMind-SI (few-shot) | 1.4014 | 6.0354 | 1.0128 | 5.5716 |
For a specific user {userID} with an interaction history in chronological order, the agent is prompted: "user {userID} has interacted with the following items in chronological order: ['Item List']. Please recommend the next item that the user might interact with. Choose the top 10 products to recommend in order of priority, from highest to lowest.". We include another baseline, S3-Rec (Zhou et al. 2020), which leverages self-supervised objectives to help sequential recommendation models better discover the correlations among items and their attributes. The results for sequential recommendation on Amazon Reviews (Beauty domain) and Yelp are shown in Table 3. RecMind with Self-Inspiring achieves performance comparable to the fully trained models P5 and S3-Rec. Without diverse planning methods such as Tree-of-Thoughts or our proposed Self-Inspiring, LLMs prefer items whose names are semantically similar to the names of preceding items. In contrast, with the help of explicit reasoning methods and access to domain knowledge, RecMind gradually uncovers helpful information, such as how items in the database connect to other users' interaction histories.

# 4.4 Experimental Results on Explainability-oriented Recommendation Tasks

With the development of NLP techniques for recommendation tasks, recent works (Geng et al. 2022) have started to explore how NLP models can improve the explainability of recommendation systems, e.g., by generating text explanations for a given recommendation or for a given interaction between a user and an item. In this section, we evaluate the performance of RecMind on two explainability-oriented recommendation tasks: explanation generation and review summarization.

Explanation Generation. In explanation generation, we assess the performance of RecMind in crafting textual explanations that justify a user's interaction with a specific item. Figure 2 shows an example of explanation generation in the Beauty domain of Amazon Reviews. The text review given by the user for the item is taken as the ground truth. The results for explanation generation on Amazon Reviews and Yelp are summarized in Table 4. They indicate that RecMind, when leveraging self-inspiring techniques, can achieve performance comparable to the fully trained P5 model. This is aided by the in-domain knowledge retrieved from personalized memory, such as reviews from other users on the same item.

Review Summarization. In this task, we evaluate the performance of RecMind in summarizing review comments into shorter review titles. We filter out test data with automatically generated review titles such as "Five Stars". Figure 2 shows an example of review summarization in the Beauty domain of Amazon Reviews. The results for review summarization on Amazon Reviews are shown in Table 5. The RecMind agent performs better than a recent LLM such as ChatGPT, but it does not outperform P5. This gap reflects the advantage of P5, which is fully trained to optimize the review summarization task, whereas GPT-based models such as RecMind tend to generate summaries only after deeply understanding the reviews.
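As a rough guide to what these text metrics measure, the sketch below implements clipped n-gram precision (the core of BLEU-n) and n-gram recall (the core of ROUGE-n). It omits BLEU's brevity penalty and smoothing, so its outputs are not directly comparable to the numbers reported in the tables.

```python
from collections import Counter
from typing import List

def ngrams(tokens: List[str], n: int) -> Counter:
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu_n_precision(candidate: str, reference: str, n: int = 2) -> float:
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())           # clipped n-gram matches
    return overlap / max(sum(cand.values()), 1)

def rouge_n_recall(candidate: str, reference: str, n: int = 2) -> float:
    cand, ref = ngrams(candidate.split(), n), ngrams(reference.split(), n)
    overlap = sum((cand & ref).values())
    return overlap / max(sum(ref.values()), 1)
```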
# 4.5 Transfer to Items in Unseen Domains

The advantage of using a large language model as a unified recommendation model is that it can judge the likelihood of any event by expressing the event in natural language. In the experiments in Section 4.3, we found that RecMind with in-domain few-shot examples achieves much better performance. In this section, we test how few-shot RecMind performs when recommending items from unseen domains. Specifically, we include few-shot examples from the Beauty domain and test the performance of RecMind on rating prediction, direct recommendation, and explanation generation with test data from the Toys and Sports domains. We include the ChatGPT prompting baseline and P5 for comparison. In the few-shot ChatGPT baseline, the user-specific examples included in the prompts are from the Beauty domain; for P5, the model trained on the Beauty domain is used for evaluation. We evaluate the domain transfer capabilities of all approaches on rating prediction, direct recommendation, and explanation generation.

Table 6: Performance on domain transfer. Comparisons are performed on MAE for rating prediction, HR@5 for direct recommendation, and BLEU2 for explanation generation.

| Methods | Domain | MAE | HR@5 | BLEU2 |
| --- | --- | --- | --- | --- |
| P5 | Beauty → Toys | 0.7932 | 0.0852 | 1.4326 |
| P5 | Beauty → Sports | 0.7013 | 0.1007 | 0.8924 |
| ChatGPT | Beauty → Toys | 0.7354 | 0.0649 | 1.4416 |
| ChatGPT | Beauty → Sports | 0.6895 | 0.7210 | 0.8795 |
| RecMind-ToT | Beauty → Toys | 0.6845 | 0.0841 | 1.3994 |
| RecMind-ToT | Beauty → Sports | 0.6457 | 0.0924 | 1.0002 |
| RecMind-SI | Beauty → Toys | 0.6779 | 0.0902 | 1.5940 |
| RecMind-SI | Beauty → Sports | 0.6245 | 0.1124 | 1.0537 |
We report MAE for rating prediction, HR@5 for direct recommendation, and BLEU2 for explanation generation in Table 6. RecMind shows better domain transfer performance than the baselines P5 and ChatGPT; the fine-tuned language model P5, in contrast, tends to overfit to the domain of its training data.

# 4.6 Human Evaluation

In this section, we use human evaluation to assess the quality and rationality of the explanations generated by RecMind. Three human evaluators (Eva 1, Eva 2, Eva 3) were asked to rank the explanations generated by P5, few-shot ChatGPT, few-shot RecMind with Tree-of-Thoughts, few-shot RecMind with Self-Inspiring, and the ground truth on 100 test cases. Table 7 shows, for each evaluator, the top-1 ratio of each method, i.e., the proportion of test cases where the given method ranks first among the alternatives based on that annotator's selection. We also report the average top-1 ratio over all three evaluators for each method. Although annotators may have individual subjectivity, the evaluations consistently show that few-shot RecMind based on Self-Inspiring, i.e., RecMind-SI, yields the most satisfactory results.
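The reported averages are plain means over the three evaluators and can be checked directly against Table 7 below:

```python
# Per-evaluator top-1 ratios from Table 7; the average column is their mean.
top1 = {
    "Ground Truth": [0.12, 0.13, 0.22],
    "P5":           [0.02, 0.06, 0.03],
    "ChatGPT":      [0.15, 0.23, 0.18],
    "RecMind-ToT":  [0.29, 0.28, 0.25],
    "RecMind-SI":   [0.42, 0.30, 0.32],
}
for method, ratios in top1.items():
    print(method, round(sum(ratios) / len(ratios), 3))
# Prints 0.157, 0.037, 0.187, 0.273, 0.347, matching the table.
```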
Table 7: Human evaluation results on explanation generation.

| Methods | Eva 1 | Eva 2 | Eva 3 | Average |
| --- | --- | --- | --- | --- |
| Ground Truth | 0.12 | 0.13 | 0.22 | 0.157 |
| P5 | 0.02 | 0.06 | 0.03 | 0.037 |
| ChatGPT | 0.15 | 0.23 | 0.18 | 0.187 |
| RecMind-ToT | 0.29 | 0.28 | 0.25 | 0.273 |
| RecMind-SI | 0.42 | 0.30 | 0.32 | 0.347 |

# 5 Conclusions

In this work, we propose RecMind, a novel LLM-powered autonomous agent for various recommendation tasks. RecMind consists of three major components: planning, which breaks a task down into smaller sub-tasks; memory, which provides the agent with the capability to retain and recall information over extended periods; and tools for obtaining relevant extra information from memory
that is missing from the model weights. We further propose a novel planning technique, Self-Inspiring, which integrates the merits of multiple reasoning paths for better planning. We evaluate RecMind across various recommendation tasks, including both precision-oriented and explainability-oriented tasks. The evaluation results show that RecMind with Self-Inspiring outperforms existing LLM-based recommendation methods on different recommendation tasks and achieves performance competitive with P5, a recent model fully pre-trained for recommendation.

# References

Anil, R.; Dai, A. M.; Firat, O.; Johnson, M.; Lepikhin, D.; Passos, A.; Shakeri, S.; Taropa, E.; Bailey, P.; Chen, Z.; et al. 2023. PaLM 2 technical report. arXiv preprint arXiv:2305.10403.

Bao, K.; Zhang, J.; Zhang, Y.; Wang, W.; Feng, F.; and He, X. 2023. TALLRec: An effective and efficient tuning framework to align large language models with recommendation. arXiv preprint arXiv:2305.00447.
Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems, 33: 1877–1901.
Chase, H. 2023. langchain. GitHub repository.

Cheng, H.-T.; Koc, L.; Harmsen, J.; Shaked, T.; Chandra, T.; Aradhye, H.; Anderson, G.; Corrado, G.; Chai, W.; Ispir, M.; et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems, 7–10.
Fan, W.; Zhao, Z.; Li, J.; Liu, Y.; Mei, X.; Wang, Y.; Tang, J.; and Li, Q. 2023. Recommender systems in the era of large language models (LLMs). arXiv preprint arXiv:2307.02046.

Geng, S.; Liu, S.; Fu, Z.; Ge, Y.; and Zhang, Y. 2022. Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5). In Proceedings of the 16th ACM Conference on Recommender Systems, 299–315.

Gravitas, S. 2023. Auto-GPT. GitHub repository.

He, X.; Deng, K.; Wang, X.; Li, Y.; Zhang, Y.; and Wang, M. 2020. LightGCN: Simplifying and powering graph convolution network for recommendation. In Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval, 639–648.

Hidasi, B.; Karatzoglou, A.; Baltrunas, L.; and Tikk, D. 2015. Session-based recommendations with recurrent neural networks. arXiv preprint arXiv:1511.06939.

Hou, Y.; Zhang, J.; Lin, Z.; Lu, H.; Xie, R.; McAuley, J.; and Zhao, W. X. 2023. Large language models are zero-shot rankers for recommender systems. arXiv preprint arXiv:2305.08845.

Kang, W.-C.; Ni, J.; Mehta, N.; Sathiamoorthy, M.; Hong, L.; Chi, E.; and Cheng, D. Z. 2023. Do LLMs understand user preferences? Evaluating LLMs on user rating prediction. arXiv preprint arXiv:2305.06474.
Koren, Y.; Bell, R.; and Volinsky, C. 2009a. Matrix factorization techniques for recommender systems. Computer, 42(8): 30–37.

Koren, Y.; Bell, R. M.; and Volinsky, C. 2009b. Matrix factorization techniques for recommender systems. Computer, 42.
Lin, J.; Dai, X.; Xi, Y.; Liu, W.; Chen, B.; Li, X.; Zhu, C.; Guo, H.; Yu, Y.; Tang, R.; and Zhang, W. 2023. How can recommender systems benefit from large language models: A survey. arXiv, abs/2306.05817.

Linden, G.; Smith, B.; and York, J. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Distributed Systems Online, 4.
Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. B. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv, abs/2304.10149.

Nakajima, Y. 2023. babyagi. GitHub repository.

Nakano, R.; Hilton, J.; Balaji, S.; Wu, J.; Ouyang, L.; Kim, C.; Hesse, C.; Jain, S.; Kosaraju, V.; Saunders, W.; et al. 2021. WebGPT: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332.
Ni, J.; Li, J.; and McAuley, J. 2019. Justifying recommendations using distantly-labeled reviews and fine-grained aspects. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 188–197.

OpenAI. 2023. GPT-4 technical report. arXiv, 2303.08774.
Park, J. S.; O'Brien, J. C.; Cai, C. J.; Morris, M. R.; Liang, P.; and Bernstein, M. S. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.

Patil, S. G.; Zhang, T.; Wang, X.; and Gonzalez, J. E. 2023. Gorilla: Large language model connected with massive APIs. arXiv preprint arXiv:2305.15334.

Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; Lomeli, M.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023. Toolformer: Language models can teach themselves to use tools. arXiv, abs/2302.04761.
Schulman, J.; Zoph, B.; Kim, C.; Hilton, J.; Menick, J.; Weng, J.; Uribe, J. F. C.; Fedus, L.; Metz, L.; Pokorny, M.; et al. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI blog.

Shen, Y.; Song, K.; Tan, X.; Li, D.; Lu, W.; and Zhuang, Y. 2023. HuggingGPT: Solving AI tasks with ChatGPT and its friends in Hugging Face. arXiv preprint arXiv:2303.17580.
Sun, F.; Liu, J.; Wu, J.; Pei, C.; Lin, X.; Ou, W.; and Jiang, P. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 1441–1450.
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023a. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971.

Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288.
Wang, L.; and Lim, E.-P. 2023. Zero-shot next-item recommendation using large pretrained language models. arXiv, abs/2304.03153.

Wei, J.; Bosma, M.; Zhao, V. Y.; Guu, K.; Yu, A. W.; Lester, B.; Du, N.; Dai, A. M.; and Le, Q. V. 2021. Finetuned language models are zero-shot learners. arXiv preprint arXiv:2109.01652.

Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Chi, E. H.; Xia, F.; Le, Q.; and Zhou, D. 2022. Chain-of-thought prompting elicits reasoning in large language models. arXiv, abs/2201.11903.
Yang, F.; Chen, Z.; Jiang, Z.; Cho, E.; Huang, X.; and Lu, Y. 2023. PALR: Personalization aware LLMs for recommendation. arXiv e-prints, arXiv:2305.

Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. Tree of Thoughts: Deliberate problem solving with large language models. arXiv, abs/2305.10601.
Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2022. ReAct: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629.

Zhou, K.; Wang, H.; Zhao, W. X.; Zhu, Y.; Wang, S.; Zhang, F.; Wang, Z.; and Wen, J.-R. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 1893–1902.
# A Appendix

# A.1 Ablation Study on Foundation LLMs

In this section, we study how RecMind performs with different types of foundation LLMs as controllers. We test RecMind with Self-Inspiring on three different LLMs, namely GPT-3.5, text-davinci-003, and GPT-4, for sequential recommendation in three different domains of Amazon Reviews. The results are illustrated in Figure 4. They show that the performance of RecMind is not sensitive to the choice of foundation LLM: although GPT-4 demonstrates enhanced reasoning on complex problems, GPT-3.5 also delivers commendable performance when leveraging the capabilities of the RecMind framework.

Figure 4: Performance comparison of RecMind-SI with different types of foundation LLMs.

# A.2 Additional Experiment Results on Amazon Reviews

In this section, we provide additional experimental results for RecMind and all compared methods on the Sports and Toys domains of Amazon Reviews. The results for rating prediction on the Sports and Toys domains are shown in Table 8. The results for direct and sequential recommendation on the Sports and Toys domains are shown in Table 9 and Table 10, respectively. The results for review summarization and explanation generation on the Sports and Toys domains are shown in Table 11 and Table 12, respectively. As these results indicate, RecMind also performs well on data from other domains of Amazon Reviews.

Table 8: Performance comparison in rating prediction on Sports and Toys domains of Amazon Reviews.
| Methods | Sports RMSE | Sports MAE | Toys RMSE | Toys MAE |
| --- | --- | --- | --- | --- |
| MF | 1.0274 | 0.7975 | 1.0193 | 0.8024 |
| MLP | 1.1277 | 0.7626 | 1.1215 | 0.8097 |
| P5 (fine-tuned, few-shot) | 1.0534 | 0.6784 | 1.0625 | 0.7134 |
| ChatGPT (zero-shot) | 1.2723 | 1.0637 | 1.3213 | 1.0117 |
| ChatGPT (few-shot) | 1.0929 | 0.6957 | 1.0519 | 0.7047 |
| RecMind-CoT (zero-shot) | 1.1490 | 0.8042 | 1.1680 | 0.8232 |
| RecMind-CoT (few-shot) | 1.0325 | 0.6446 | 1.0403 | 0.6905 |
| RecMind-ToT (BFS, few-shot) | 1.0307 | 0.6289 | 1.0279 | 0.6823 |
| RecMind-ToT (DFS, few-shot) | 1.0545 | 0.6433 | 1.0196 | 0.6801 |
| RecMind-SI (zero-shot) | 1.1230 | 0.7913 | 1.1412 | 0.8103 |
| RecMind-SI (few-shot) | 1.0124 | 0.6122 | 1.0086 | 0.6712 |

Table 9: Performance comparison in direct recommendation and sequential recommendation on the Sports domain of Amazon Reviews.
Direct Recommendation:

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| BPR-MLP | 0.1520 | 0.0927 | 0.2671 | 0.1296 |
| P5 (pre-trained, few-shot) | 0.1765 | 0.1196 | 0.2235 | 0.1325 |
| ChatGPT (zero-shot) | 0.0376 | 0.0317 | 0.0902 | 0.0459 |
| ChatGPT (few-shot) | 0.0388 | 0.0267 | 0.1003 | 0.0502 |
| RecMind-CoT (zero-shot) | 0.0607 | 0.0435 | 0.1259 | 0.0757 |
| RecMind-CoT (few-shot) | 0.0782 | 0.0527 | 0.1475 | 0.1034 |
| RecMind-ToT (BFS, few-shot) | 0.0874 | 0.0542 | 0.1475 | 0.1218 |
| RecMind-ToT (DFS, few-shot) | 0.0815 | 0.0557 | 0.1412 | 0.1272 |
| RecMind-SI (zero-shot) | 0.0835 | 0.0684 | 0.1379 | 0.1103 |
| RecMind-SI (few-shot) | 0.1115 | 0.0814 | 0.1769 | 0.1303 |

Sequential Recommendation:

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| S3-Rec | 0.0251 | 0.0161 | 0.0385 | 0.0204 |
| P5 (pre-trained, few-shot) | 0.0357 | 0.0289 | 0.0416 | 0.0324 |
| ChatGPT (zero-shot) | 0.0039 | 0.0008 | 0.0051 | 0.0008 |
| ChatGPT (few-shot) | 0.0130 | 0.0075 | 0.0207 | 0.0070 |
| RecMind-CoT (zero-shot) | 0.0135 | 0.0090 | 0.0248 | 0.0105 |
| RecMind-CoT (few-shot) | 0.0300 | 0.0138 | 0.0437 | 0.0247 |
| RecMind-ToT (BFS, few-shot) | 0.0338 | 0.0186 | 0.0473 | 0.0272 |
| RecMind-ToT (DFS, few-shot) | 0.0316 | 0.0162 | 0.0448 | 0.0260 |
| RecMind-SI (zero-shot) | 0.0290 | 0.0151 | 0.0420 | 0.0255 |
| RecMind-SI (few-shot) | 0.0366 | 0.0240 | 0.0525 | 0.0320 |
Table 10: Performance comparison in direct recommendation and sequential recommendation on the Toys domain of Amazon Reviews.

Direct Recommendation:

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| BPR-MLP | 0.1142 | 0.0688 | 0.2077 | 0.0988 |
| P5 (pre-trained, few-shot) | 0.1278 | 0.0743 | 0.1859 | 0.1089 |
| ChatGPT (zero-shot) | 0.0114 | 0.0075 | 0.0638 | 0.0191 |
| ChatGPT (few-shot) | 0.0130 | 0.0059 | 0.0805 | 0.0270 |
| RecMind-CoT (zero-shot) | 0.0399 | 0.0233 | 0.1031 | 0.0542 |
| RecMind-CoT (few-shot) | 0.0580 | 0.0295 | 0.1247 | 0.0719 |
| RecMind-ToT (BFS, few-shot) | 0.0636 | 0.0300 | 0.1257 | 0.0813 |
| RecMind-ToT (DFS, few-shot) | 0.0603 | 0.0315 | 0.1204 | 0.0817 |
| RecMind-SI (zero-shot) | 0.0577 | 0.0432 | 0.1161 | 0.0828 |
| RecMind-SI (few-shot) | 0.0813 | 0.0532 | 0.1461 | 0.0998 |

Sequential Recommendation:

| Methods | HR@5 | NDCG@5 | HR@10 | NDCG@10 |
| --- | --- | --- | --- | --- |
| S3-Rec | 0.0443 | 0.0294 | 0.0700 | 0.0376 |
| P5 (pre-trained, few-shot) | 0.0612 | 0.0524 | 0.0702 | 0.0569 |
| ChatGPT (zero-shot) | 0.0192 | 0.0158 | 0.0212 | 0.0165 |
| ChatGPT (few-shot) | 0.0282 | 0.0231 | 0.0367 | 0.0230 |
| RecMind-CoT (zero-shot) | 0.0285 | 0.0246 | 0.0408 | 0.0265 |
| RecMind-CoT (few-shot) | 0.0452 | 0.0294 | 0.0597 | 0.0407 |
| RecMind-ToT (BFS, few-shot) | 0.0490 | 0.0342 | 0.0633 | 0.0432 |
| RecMind-ToT (DFS, few-shot) | 0.0468 | 0.0318 | 0.0608 | 0.0420 |
| RecMind-SI (zero-shot) | 0.0442 | 0.0307 | 0.0580 | 0.0415 |
| RecMind-SI (few-shot) | 0.0518 | 0.0396 | 0.0685 | 0.0480 |
Table 11: Performance comparison on review summarization and explanation generation on the Sports domain of Amazon Reviews.

Review Summarization:

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 2.5874 | 11.8971 | 3.0257 | 10.5472 |
| ChatGPT (zero-shot) | 0.9024 | 5.7402 | 1.2493 | 3.6791 |
| ChatGPT (few-shot) | 1.2579 | 6.3190 | 1.4257 | 3.8912 |
| RecMind-CoT (zero-shot) | 1.5840 | 6.5310 | 1.4390 | 5.0140 |
| RecMind-CoT (few-shot) | 1.6014 | 6.7125 | 1.5479 | 5.2175 |
| RecMind-ToT (BFS, few-shot) | 1.7125 | 6.7986 | 1.5724 | 5.3794 |
| RecMind-ToT (DFS, few-shot) | 1.6542 | 6.6540 | 1.5639 | 5.2960 |
| RecMind-SI (zero-shot) | 1.6120 | 6.6259 | 1.5029 | 5.1891 |
| RecMind-SI (few-shot) | 1.7388 | 6.8130 | 1.6217 | 5.5632 |

Explanation Generation:

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 1.1412 | 14.0329 | 2.1279 | 11.1894 |
| ChatGPT (zero-shot) | 0.0611 | 7.2892 | 0.9921 | 5.6923 |
| ChatGPT (few-shot) | 1.2358 | 9.6405 | 2.8723 | 6.2824 |
| RecMind-CoT (zero-shot) | 0.9687 | 8.3097 | 2.1320 | 7.1427 |
| RecMind-CoT (few-shot) | 1.3874 | 11.0487 | 3.0216 | 8.1146 |
| RecMind-ToT (BFS, few-shot) | 1.3765 | 11.5749 | 2.8023 | 8.4256 |
| RecMind-ToT (DFS, few-shot) | 1.4018 | 11.6475 | 3.0107 | 8.6032 |
| RecMind-SI (zero-shot) | 1.2374 | 9.4294 | 2.5405 | 8.2120 |
| RecMind-SI (few-shot) | 1.4287 | 12.0060 | 3.0481 | 9.5812 |
Table 12: Performance comparison in review summarization and explanation generation on the Toys domain of Amazon Reviews.

Review Summarization:

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 1.8760 | 9.0351 | 1.5230 | 8.1746 |
| ChatGPT (zero-shot) | 0.5941 | 4.4571 | 0.4052 | 4.0612 |
| ChatGPT (few-shot) | 0.8420 | 4.8179 | 0.3178 | 4.2889 |
| RecMind-CoT (zero-shot) | 1.1579 | 5.7276 | 0.7158 | 5.5691 |
| RecMind-CoT (few-shot) | 1.2394 | 6.3395 | 0.9453 | 5.8123 |
| RecMind-ToT (BFS, few-shot) | 1.2668 | 6.3186 | 0.9251 | 5.6159 |
| RecMind-ToT (DFS, few-shot) | 1.2515 | 6.2791 | 0.9356 | 5.5976 |
| RecMind-SI (zero-shot) | 1.1897 | 6.2578 | 0.8976 | 5.8724 |
| RecMind-SI (few-shot) | 1.2974 | 6.8352 | 1.1125 | 6.2718 |

Explanation Generation:

| Methods | BLEU2 | ROUGE1 | ROUGE2 | ROUGEL |
| --- | --- | --- | --- | --- |
| P5 (pre-trained expert, few-shot) | 2.2850 | 15.0416 | 3.6798 | 12.1065 |
| ChatGPT (zero-shot) | 0.1379 | 9.7892 | 1.5416 | 5.3158 |
| ChatGPT (few-shot) | 2.0169 | 11.8905 | 3.2049 | 6.2689 |
| RecMind-CoT (zero-shot) | 2.1354 | 11.0597 | 2.7590 | 7.1445 |
| RecMind-CoT (few-shot) | 2.4079 | 12.7987 | 3.5146 | 7.4153 |
| RecMind-ToT (BFS, few-shot) | 2.4565 | 12.8249 | 3.6327 | 7.6234 |
| RecMind-ToT (DFS, few-shot) | 2.4152 | 12.8975 | 3.6079 | 7.7112 |
| RecMind-SI (zero-shot) | 2.2740 | 11.6794 | 2.2460 | 7.2536 |
| RecMind-SI (few-shot) | 2.4674 | 13.2560 | 3.6920 | 7.9987 |
# ISR-LLM: Iterative Self-Refined Large Language Model for Long-Horizon Sequential Task Planning

Zhehua Zhou, University of Alberta, [email protected]
Jiayang Song, University of Alberta, [email protected]
Kunpeng Yao, Swiss Federal Institute of Technology Lausanne (EPFL), [email protected]
Zhan Shu, University of Alberta, [email protected]
Lei Ma, The University of Tokyo and University of Alberta, [email protected]

# Abstract

Motivated by the substantial achievements of Large Language Models (LLMs) in the field of natural language processing, recent research has begun to investigate the application of LLMs to complex, long-horizon sequential task planning challenges in robotics. LLMs are advantageous in offering the potential to enhance generalizability as task-agnostic planners and to facilitate flexible interaction between human instructors and planning systems. However, task plans generated by LLMs often lack feasibility and correctness. To address this challenge, we introduce ISR-LLM, a novel framework that improves LLM-based planning through an iterative self-refinement process. The framework operates through three sequential steps: preprocessing, planning, and iterative self-refinement. During preprocessing, an LLM translator is employed to convert natural language input into a Planning Domain Definition Language (PDDL) formulation. In the planning phase, an LLM planner formulates an initial plan, which is then assessed and refined in the iterative self-refinement step by using a validator. We examine the performance of ISR-LLM across three distinct planning domains. The results show that ISR-LLM achieves markedly higher success rates in task accomplishment compared to state-of-the-art LLM-based planners, while preserving the broad applicability and generalizability of working with natural language instructions. The code related to this work is available at https://github.com/zhehuazhou/ISR-LLM.

# 1 Introduction
Large Language Models (LLMs), underpinned by deep learning architectures, have recently revolutionized artificial intelligence (AI) by demonstrating unprecedented abilities in understanding, generating, and manipulating natural language text Bommasani et al. (2021); Brown et al. (2020); Devlin et al. (2018); Radford et al. (2019); Raffel et al. (2020). This surge in LLM research has been accompanied by a growing interest in leveraging these models to tackle a diverse array of challenges across various research fields, including data analysis Agrawal et al. (2022),
code generation Vaithilingam et al. (2022), reasoning Zelikman et al. (2022), robotic control Ahn et al. (2022), and so on.

Due to their rich internalized knowledge about the world Petroni et al. (2019); Davison et al. (2019), LLMs have also garnered considerable attention within the field of long-horizon sequential task planning Roijers et al. (2013). Unlike short-term robotic planning problems, long-horizon sequential task planning often involves devising interconnected actions that span extended timeframes to achieve control objectives. Since the execution of actions at one point in time can greatly impact subsequent actions and outcomes, long-horizon planning is usually considered the more challenging problem, owing to its inherent intricacy in managing temporal dependencies and combinatorial complexity Hartmann et al. (2022), thereby necessitating planning approaches that balance the trade-offs between efficiency, optimality, and adaptability.

The traditional way to address long-horizon sequential task planning typically relies on first establishing a symbolic and logic-based representation of the planning problem Haslum et al. (2019), and then employing techniques such as state space search Zhang (1999) or heuristic search Edelkamp and Schrödl (2011) to find a feasible solution. However, this method usually requires the manual specification of symbolic planning domains, which demands a notable degree of domain expertise. Furthermore, many desirable properties of plans, e.g., user preferences, which can be specified in natural language by individuals without specialized training, may prove intricate or even infeasible to encapsulate within formal logic frameworks. As a result, the adaptability of conventional methods is constrained, limiting their utility in diverse contexts.

To overcome this limitation, there is a growing trend in recent studies to explore the potential of utilizing LLMs as task-agnostic reasoning modules, with the aim of facilitating more generalized and intelligent robotic planning Ahn et al. (2022); Huang et al. (2022c). Leveraging their pre-trained knowledge, these LLM-based planners are able to effectively comprehend both explicit human-generated natural language directives and the inherent constraints interwoven within planning tasks Huang et al. (2022a).
This greatly reduces the necessity for labor-intensive manual rule encoding and circumvents the need for intricate specification of symbolic planning domains Lin et al. (2023). Moreover, the intuitive nature of textual prompts allows for seamless interactions between LLM-based planners and human instructors, facilitating the integration of human expertise into the planning process.

However, the efficacy and reliability of such LLM-based planners are often unsatisfying, due to the inherent design and training methodologies of LLMs. LLMs are essentially engineered to generate word sequences that align with human-like context, yet their planning capabilities are not guaranteed Brown et al. (2020). Recent investigations have revealed instances where the correctness of generated actions and the success rate of task accomplishment by LLM-based planners fall short of expectations Valmeekam et al. (2022). This limitation becomes even more pronounced in long-horizon sequential task planning, where complex action dependencies and extended temporal considerations further challenge the planning abilities of LLMs.

In this work, we aim to enhance the performance of LLMs in long-horizon sequential task planning. Drawing inspiration from recent research that reveals the potential for LLM improvements through self-refinement Madaan et al. (2023); Huang et al. (2022b), we propose the Iterative Self-Refined LLM (ISR-LLM) framework, which utilizes iterative self-refinement to improve planning outcomes. Our framework consists of three steps (see Fig. 1): (1) preprocessing, where an LLM translator translates the natural language inputs into their respective Planning Domain Definition Language (PDDL) Haslum et al. (2019) formulations; (2) planning, where an LLM planner takes the translated PDDL problem as input and determines the action sequence needed to accomplish the long-horizon sequential planning task; and (3) iterative self-refinement, where a validator examines the correctness of the generated action plan and provides feedback to the LLM planner, which then performs an iterative self-refinement process to find a revised action plan. We consider two different types of validators in our approach: an LLM-based self-validator and an external validator that leverages auxiliary verification tools.
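At a high level, the three steps reduce to the loop sketched below; `translate`, `plan`, and `validate` are hypothetical stand-ins for the LLM translator, the LLM planner, and either kind of validator, and the refinement budget is an illustrative choice.

```python
from typing import Callable, List, Tuple

def isr_llm(
    instruction: str,
    translate: Callable[[str], Tuple[str, str]],                   # NL -> (domain PDDL, problem PDDL)
    plan: Callable[[str, str, str], List[str]],                    # (domain, problem, feedback) -> action plan
    validate: Callable[[str, str, List[str]], Tuple[bool, str]],   # -> (is_valid, feedback on first error)
    max_rounds: int = 5,
) -> List[str]:
    domain, problem = translate(instruction)          # 1) preprocessing
    actions = plan(domain, problem, "")               # 2) initial planning
    for _ in range(max_rounds):                       # 3) iterative self-refinement
        ok, feedback = validate(domain, problem, actions)
        if ok:
            return actions
        actions = plan(domain, problem, feedback)     # re-plan using the error feedback
    return actions    # best effort once the refinement budget is exhausted
```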
Through comprehensive experiments across diverse planning problem domains, we show that, compared to state-of-the-art approaches, ISR-LLM achieves better feasibility and success rates in long-horizon sequential task planning. The contributions of this work are threefold:
â ¢ We present ISR-LLM, a novel framework achieved by integrating a self-refinement mecha- nism into LLM. This approach addresses long-horizon sequential task planning and offers remarkable advancements in both feasibility and correctness. â ¢ We introduce and evaluate the effectiveness of two types of validators, i.e., an LLM-based self-validator and an external validator, in providing feedback to the LLM planner for executing the iterative self-refinement process. â ¢ We highlight the superiority of our proposed framework in comparison to contemporary state-of-the-art methods, through an investigation of ISR-LLM across three diverse planning domains. # 2 Related Work # 2.1 Long-Horizon Sequential Task Planning Long-horizon sequential task planning aims to find an optimal action sequence capable of accom- plishing a specified task objective Helmert (2006). In recent robotic studies, PDDL or Answer Set Programming (ASP) Brewka et al. (2011) are often utilized as the language for representing the planning problems Jiang et al. (2019). A prevalent method employed to tackle these planning tasks is to utilize a search-based or sampling-based algorithm to find a viable plan Levine and Humphreys (2003); Segovia-Aguas et al. (2021); Cohen et al. (2010). This strategy has found successful ap- plications across diverse robotic domains, e.g., mobile robots Zhang et al. (2015), autonomous vehicles Ding et al. (2020), and robotic manipulators Garrett et al. (2020). However, these approaches rely on a predetermined symbolic and logical representation of the planning domain, which usually demands a high level of expert knowledge for formulation. Moreover, due to the inherent abundance of potential action options associated with long-horizon sequential task planning, search-based or sampling-based strategies may encounter impediments in such scenarios. Some approaches also use example plans to construct novel plans, which are often represented through a finite state ma- chine Levesque (2005); Winner (2008). However, finding a useful example plan may be challenging or even impossible within certain task scenarios. It is also worth mentioning that, another important category of robotic planning is Task and Motion Planning (TAMP) Garrett et al. (2021), which combines high-level task planning in discrete spaces and low-level robot motion planning in continuous space as a hierarchical planning framework.
In TAMP, the focus extends beyond mere task planning to encompass the executability of the determined actions, i.e., the actions must be executable by the robot with a viable motion trajectory that is subject to both robotic and environmental constraints Toussaint (2015); Driess et al. (2019). However, how to accurately ground actions generated by LLMs into feasible robot motions remains a challenging and ongoing area of research Ahn et al. (2022); Huang et al. (2022c). Therefore, in this work, we focus only on exploring the task planning capabilities of LLMs. # 2.2 Planning with LLM To overcome the limited generalizability of traditional task planners, researchers have started inves- tigating the possibility of utilizing LLMs as task-agnostic planners Sharma et al. (2021); Li et al. (2022); Zeng et al. (2022); Singh et al. (2023). A multitude of studies have delved into grounding the language commands generated by LLMs to executable robotic actions Ahn et al. (2022); Huang et al. (2022c); Ding et al. (2023); Lin et al. (2023). For instance, in Ahn et al. (2022), scores are assigned to potential actions through a value function, and the action with the highest likelihood of
3 success is selected. Similarly, Huang et al. (2022a) adopts prompt engineering to extract actions that are executable for the robots. In Huang et al. (2022c), environmental feedback is introduced to enable online adjustment of action plans that are infeasible for the robots. Although the focus of this work is not the grounding of actions, these studies illustrate the competencies of LLMs in addressing diverse robotic planning tasks. Besides grounding language instructions, recent studies have also sought to combine LLMs with PDDL as a means of elevating the performance of LLM-based planners Valmeekam et al. (2022); Silver et al. (2022, 2023); Liu et al. (2023). In Valmeekam et al. (2022), a Blocksworld Slaney and Thiébaux (2001) benchmark is proposed to assess the LLMâ s capability in handling natural language inputs for planning. However, the results reveal a discouraging performance of LLMs in long-horizon task planning, even within seemingly uncomplicated tasks. In Silver et al. (2022, 2023), instead of natural language inputs, planning problems in PDDL syntax are directly presented to LLMs for generating action sequences. While this strategy contributes to enhanced performance, it inevitably diminishes the LLMâ s generalizability and often demands additional effort and expert knowledge for composing the corresponding PDDL files. In Liu et al. (2023), LLM is employed not as a planner, but rather as a translator that converts natural language inputs into PDDL problems, which are subsequently solved using classical PDDL planners. However, such an approach requires an external solver, potentially impeding the wider applicability of LLMs as task-agnostic planners. An analogous notion akin to our self-refinement concept is introduced in Raman et al. (2022). After the generation of an action plan based on natural language inputs, it collects the error information returned from the execution of the plan. This information is then constructed as re-prompts that direct the LLM towards correcting the erroneous actions. However, such a refinement process occurs subsequent to the action execution phase. Our approach, in comparison, not only considers the utilization of an external validator to perform a similar self-refinement process, but also investigates the potential of LLMs for enabling pre-execution action corrections through self-validation capabilities.
# 3 Preliminary # 3.1 Task Planning In this work, we consider the problem of task planning in a setting with discrete and fully observable states, finite actions, and deterministic transitions. Such a problem P is often represented by a tuple P = â ¨S, A, T, sinit, Gâ ©. For each state s â S within the discrete set of states S, an action a â A can be selected from the set of applicable actions A(s) â A, i.e., the preconditions of the action a must be fulfilled. The transition function T : S Ã A â S determines the next state based on the current state and the selected action. sinit â S represents the initial state and G â S is a set of goal states. A solution to the planning problem P is a sequential action plan Ï = (a1, a2, . . . , an) that controls the initial state sinit to a goal state, i.e., we have si+1 = T (si, ai) satisfied for all 0 â ¤ i â ¤ n and sn+1 â G. For long-horizon sequential task planning, the number of actions n tends to be relatively large. In this work, we focus on investigating the capabilities of LLM in solving the designated task planning problem P . Thus, our primary focus is the feasibility and success rate of planning rather than its optimality. # 3.2 PDDL PDDL is a standardized encoding format designed for classical planning problems Aeronautiques et al. (1998); Fox and Long (2003). A planning problem P represented in PDDL syntax consists of two files: a domain file and a problem file. The domain file embodies the foundational rules of the planning domain. It not only defines the predicates that elucidate the configuration of the state space S, but also formulates the preconditions and effects of all possible actions a â A, i.e., the transition function T . The problem file is used to define the available objects within the planning domain, as well as the initial state and goal conditions. Concrete examples of PDDL domain and problem files for the experiments considered in this work can be found in Appendix A.1. In this work, we assume that the natural language input provided to the LLM should include both the initial state and the goal conditions, such that the LLM translator is able to convert it into corresponding PDDL files.
For more details about PDDL, we direct the interested readers to Haslum et al. (2019). 4 # 4 ISR-LLM In this section, we introduce ISR-LLM, a novel framework that utilizes iterative self-refinement to find an action plan with improved accuracy and feasibility. It includes three steps: preprocessing with an LLM translator, planning with an LLM planner, and iterative self-refinement loop with a validator that is selected from either an LLM-based self-validator or an external validator. Details are explained as follows. # 4.1 Preprocessing with LLM Translator As illustrated in Fig. 1, the LLM translator first converts the given natural language instructions into a PDDL formulation, specifically representing them using the domain and problem files. The rationale for employing such a translator is grounded in its notable advantages, even though an LLM planner could be designed to operate directly on natural language inputs, as demonstrated in Lin et al. (2023). The adoption of a formal representation, i.e., PDDL, offers twofold benefits to the subsequent validation process of the generated plan. Firstly, it enables the usage of existing PDDL validators as the external validator, e.g., VAL Howey et al. (2004) or PDDL.lj Zhi-Xuan (2022). This obviates the necessity of developing a custom validator and thereby saves substantial time and effort. Secondly, rather than relying solely on language cues, this approach enables the LLM-based self-validator to acquire a comprehension akin to a state-machine understanding of the system state. This, in turn, facilitates a more precise evaluation of the correctness of the selected actions. In order to ensure the structural accuracy of the translated PDDL files, we adopt a technique known as few-shot in-context learning Brown et al. (2020). This technique involves embedding illustrative examples within the prompt, effectively instructing the LLM on how to formulate responses to given queries in a desired manner. Similar to Liu et al. (2023), we assume that the domain-specific knowledge pertinent to each considered planning task is available in advance, and thus include it within the few-shot examples provided to the LLM translator.
An example of the prompt presented to the LLM translator for the Blocksworld planning domain (see Sec. 5.1 for a detailed explanation about this domain) is shown in Fig. 2, and a complete list of all employed few-shot examples within this work is given in Appendix A.1. # 4.2 Planning with LLM Planner Once the natural language input is translated, the LLM planner takes these PDDL files as inputs and determines an action sequence aimed at achieving the given task (see Fig. 1). In addition to few-shot in-context learning, we also integrate the Chain-of-Thought (CoT) technique Wei et al. (2022) into the prompts provided to the LLM planner. CoT operates by decomposing the overall problem into intermediate steps, thus enabling the LLM to tackle complex reasoning problems that may not be solvable via standard prompting methods. An illustrative example of the prompt presented to the LLM planner is given in Fig. 2, and a comprehensive list of all the employed few-shot examples is accessible in Appendix A.2. Within this step, we obtain an initial action plan for addressing the given planning problem. Subse- quently, as detailed in the next subsection, such an initial plan is examined by a validator. Utilizing the feedback received from the validator, the LLM planner performs a self-refinement to find a new plan that attempts to correct erroneous actions. # Iterative Self-Refinement Loop with Validator The central component of the iterative self-refinement loop is the validator, as demonstrated in Fig. 1. Through the examination of the generated action sequence, the validator constructs feedback, pinpointing any actions considered incorrect, and subsequently conveys this information to the LLM planner. Then based on the feedback, the LLM planner initiates a self-refinement process to rectify the incorrect action and devise a new action plan. Note that, while the generated action sequence may contain multiple errors, analyzing actions subsequent to the initial error is often unnecessary, since the first error could potentially render the foundation of all ensuing actions fundamentally flawed. Thus, the self-refinement process is executed iteratively within a loop, where in each step, the validator stops at the first identified error. The information concerning this error is then returned, ensuring that each iterative stage is solely focused on rectifying this detected mistake. The iterative