Dataset columns (type, min–max length or value):
doi: string, 10–10
chunk-id: int64, 0–936
chunk: string, 401–2.02k
id: string, 12–14
title: string, 8–162
summary: string, 228–1.92k
source: string, 31–31
authors: string, 7–6.97k
categories: string, 5–107
comment: string, 4–398
journal_ref: string, 8–194
primary_category: string, 5–17
published: string, 8–8
updated: string, 8–8
references: list
2309.00071
42
[31] B. Rozière, J. Gehring, F. Gloeckle, S. Sootla, I. Gat, X. E. Tan, Y. Adi, J. Liu, T. Remez, J. Rapin, A. Kozhevnikov, I. Evtimov, J. Bitton, M. Bhatt, C. C. Ferrer, A. Grattafiori, W. Xiong, A. Défossez, J. Copet, F. Azhar, H. Touvron, L. Martin, N. Usunier, T. Scialom, and G. Synnaeve. Code Llama: Open foundation models for code, 2023. arXiv: 2308.12950. [32] P. Shaw, J. Uszkoreit, and A. Vaswani. Self-attention with relative position representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 2 (Short Papers), pages 464–468, New Orleans, Louisiana, June 2018. Association for Computational Linguistics. [33] J. Su. Rectified rotary position embeddings. https://github.com/bojone/rerope, 2023.
2309.00071#42
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
43
(2) One-Turn Recommendation. Following the settings of traditional conversational recommender systems on ReDial (Li et al. 2018), we also adopt the one-turn recommendation strategy. Given a user’s history, we design a prompt that enables GPT-4 to generate a dialogue, thereby emulating the interaction between a user and a recommendation agent. The objective is to ascertain whether the recommendation agent can accurately suggest the ground truth item in its next response. We assess both the item retrieval task (retrieval from the entire space) and the ranking task (ranking of provided candidates). Specifically, the dialogue context is presented to the recommendation agent, accompanied by the instruction "Please give me k recommendations based on the chat history" for the retrieval task, and the instruction "Please rank these candidate items based on the chat history" for the ranking task. To ensure a fair comparison with baseline LLMs, the One-Turn Recommendation evaluation protocol employs only the “session-wise” setting, and the long-term memory module in InteRecAgent remains deactivated. Dataset. To compare methods across different domains, we conduct experiments using three datasets: Steam (https://github.com/kang205/SASRec),
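Below is a minimal sketch of how the two one-turn evaluation prompts described in this chunk could be assembled from a simulated dialogue context; the helper names are ours, and the prompt wording simply reuses the instructions quoted above.

```python
def build_retrieval_prompt(chat_history: str, k: int) -> str:
    # Retrieval task: ask the agent for k recommendations from the entire item space.
    return (
        f"{chat_history}\n"
        f"Please give me {k} recommendations based on the chat history"
    )

def build_ranking_prompt(chat_history: str, candidates: list[str]) -> str:
    # Ranking task: ask the agent to order a list of provided candidate items.
    numbered = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(candidates))
    return (
        f"{chat_history}\n"
        f"Candidate items:\n{numbered}\n"
        f"Please rank these candidate items based on the chat history"
    )
```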
2308.16505#43
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
43
[33] J. Su. Rectified rotary position embeddings. https://github.com/bojone/rerope, 2023. [34] J. Su, Y. Lu, S. Pan, A. Murtadha, B. Wen, and Y. Liu. RoFormer: Enhanced transformer with rotary position embedding, 2022. arXiv: 2104.09864. [35] Y. Sun, L. Dong, B. Patra, S. Ma, S. Huang, A. Benhaim, V. Chaudhary, X. Song, and F. Wei. A length-extrapolatable transformer, 2022. arXiv: 2212.10554. [36] M. Tancik, P. P. Srinivasan, B. Mildenhall, S. Fridovich-Keil, N. Raghavan, U. Singhal, R. Ramamoorthi, J. T. Barron, and R. Ng. Fourier features let networks learn high frequency functions in low dimensional domains. In Proceedings of the 34th International Conference on Neural Information Processing Systems, NIPS’20, Red Hook, NY, USA, 2020. Curran Associates Inc. ISBN 9781713829546.
2309.00071#43
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
44
Dataset. To compare methods across different domains, we conduct experiments using three datasets: Steam (https://github.com/kang205/SASRec), MovieLens (https://grouplens.org/datasets/movielens/10m) and Amazon Beauty (http://jmcauley.ucsd.edu/data/amazon/links.html). Each dataset comprises user-item interaction history data and item metadata. We apply the leave-one-out method to divide the interaction data into training, validation, and testing sets. The training of all utilized tools is performed on the training and validation sets. Due to budget constraints, we randomly sample 1000 and 500 instances from the testing set for user simulator and one-turn benchmarking respectively. For the lifelong simulator, due to the costly long conversations, we use 100 instances in evaluation. Baselines. As dialogue recommendation agents, we compare our methods with the following baselines: • Random: Sample k items uniformly from the entire item set. • Popularity: Sample k items with item popularity as the weight. • LlaMA-2-7B-chat, LlaMA-2-13B-chat (Touvron et al. 2023b): The second version of the LlaMA model released by Meta.
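As an illustration of the leave-one-out protocol mentioned above, the sketch below holds out each user's last interaction for testing and the second-to-last for validation; the function and variable names are ours, not from the paper's released code.

```python
from collections import defaultdict

def leave_one_out_split(interactions):
    """interactions: iterable of (user_id, item_id, timestamp) tuples."""
    per_user = defaultdict(list)
    for user, item, ts in interactions:
        per_user[user].append((ts, item))

    train, valid, test = [], [], []
    for user, events in per_user.items():
        items = [item for _, item in sorted(events)]  # chronological order
        if len(items) < 3:  # too short to split; keep everything for training
            train += [(user, i) for i in items]
            continue
        train += [(user, i) for i in items[:-2]]   # all but the last two interactions
        valid.append((user, items[-2]))            # second-to-last -> validation
        test.append((user, items[-1]))             # last interaction -> test
    return train, valid, test
```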
2308.16505#44
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
45
• Vicuna-v1.5-7B, Vicuna-v1.5-13B (Chiang et al. 2023): Open-source models fine-tuned with user-shared data from ShareGPT (https://sharegpt.com/) on top of the LlaMA-2 foundation models. • Chat-Rec (Gao et al. 2023b): A recently proposed conversational recommendation agent that utilizes a text-embedding tool (OpenAI text-embedding-ada-002) to retrieve candidates. It then processes the content with an LLM before responding to users. We denote the use of GPT-3.5 as the LLM in the second stage with "Chat-Rec (3.5)" and the use of GPT-4 with "Chat-Rec (4)". • GPT-3.5, GPT-4 (OpenAI 2023): We access these LLMs through the OpenAI API service. The GPT-3.5 version in use is gpt-3.5-turbo-0613 and the GPT-4 version is gpt-4-0613. For the LlaMA and Vicuna models, we employ the FastChat (Zheng et al. 2023) package to establish local APIs, ensuring their usage is consistent with GPT-3.5 and GPT-4.
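Since FastChat serves local models behind an OpenAI-compatible endpoint, the baselines above can in principle be queried with the same client code as GPT-3.5/GPT-4. A hedged sketch using the legacy openai<1.0 Python client follows; the URL, port, and model name are placeholders, not values from the paper.

```python
import openai

openai.api_base = "http://localhost:8000/v1"  # local FastChat API server (placeholder)
openai.api_key = "EMPTY"                      # FastChat does not check the key

response = openai.ChatCompletion.create(
    model="vicuna-7b-v1.5",                   # placeholder local model name
    messages=[{"role": "user", "content": "Please give me 5 game recommendations."}],
)
print(response["choices"][0]["message"]["content"])
```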
2308.16505#45
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
45
[39] H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. C. Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M.-A. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E.
2309.00071#45
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
46
Metrics. Since both our method and the baselines utilize LLMs to generate responses, which exhibit state-of-the-art text generation capabilities, our experiments primarily compare the recommendation performance of different methods. For the user simulator strategy, we employ two metrics: Hit@k and AT@k, representing the success of recommending the target item within k turns and the average turns (AT) required for a successful recommendation, respectively. Unsuccessful recommendations within k rounds are recorded as k + 1 in calculating AT. In the one-turn strategy, we focus on the Recall@k and NDCG@k metrics for the retrieval and ranking tasks, respectively. In Recall@k, the k represents the retrieval of k items, whereas in NDCG@k, the k denotes the number of candidates to be ranked. Footnotes: 5 https://grouplens.org/datasets/movielens/10m 6 http://jmcauley.ucsd.edu/data/amazon/links.html 7 https://sharegpt.com/ 8 https://platform.openai.com/docs/models/ Table 2: Performance comparisons with the user simulator strategy (session-wise). H@5 is an abbreviation for Hit@5.
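A hedged sketch of the session-wise metrics defined above: `turns_to_success` holds, for each evaluated session, the 1-based turn at which the target item was first recommended, or None on failure; failures are counted as k + 1 when averaging turns, as stated in the text. The names are ours.

```python
def hit_and_avg_turns(turns_to_success, k=5):
    # Hit@k: fraction of sessions where the target was recommended within k turns.
    hits = [t is not None and t <= k for t in turns_to_success]
    # AT@k: average number of turns, with failures recorded as k + 1.
    turns = [t if (t is not None and t <= k) else k + 1 for t in turns_to_success]
    return sum(hits) / len(hits), sum(turns) / len(turns)

# Example: successes at turns 2 and 4, one failure within 5 turns.
print(hit_and_avg_turns([2, 4, None], k=5))  # -> (0.666..., 4.0)
```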
2308.16505#46
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
47
Table 2: Performance comparisons with the user simulator strategy (session-wise). H@5 is an abbreviation for Hit@5.

Methods       | Steam H@5↑ | Steam AT@5↓ | MovieLens H@5↑ | MovieLens AT@5↓ | Beauty H@5↑ | Beauty AT@5↓
LlaMA2-7B     | 0.36 | 4.76 | 0.50 | 4.71 | 0.03 | 5.91
LlaMA2-13B    | 0.39 | 4.56 | 0.53 | 4.52 | 0.05 | 5.87
Vicuna-7B     | 0.38 | 4.70 | 0.51 | 4.70 | 0.03 | 5.90
Vicuna-13B    | 0.40 | 4.60 | 0.54 | 4.56 | 0.07 | 5.85
Chat-Rec(3.5) | 0.74 | 3.63 | 0.76 | 3.78 | 0.39 | 4.89
Chat-Rec(4)   | 0.83 | 3.42 | 0.82 | 3.62 | 0.40 | 4.80
GPT-3.5       | 0.69 | 3.68 | 0.75 | 3.75 | 0.13 | 5.68
GPT-4         | 0.78 | 3.34 | 0.79 | 3.70 | 0.15 | 5.59
Ours          | 0.87 | 2.86 | 0.85 | 3.15 | 0.54 | 3.99
2308.16505#47
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
47
[40] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and I. Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. [41] R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi. HellaSwag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019. [42] Y. Zhao, A. Gu, R. Varma, L. Luo, C.-C. Huang, M. Xu, L. Wright, H. Shojanazeri, M. Ott, S. Shleifer, A. Desmaison, C. Balioglu, B. Nguyen, G. Chauhan, Y. Hao, and S. Li. PyTorch FSDP: Experiences on scaling fully sharded data parallel, 2023. arXiv: 2304.11277. # A Additional details on interpolation methods # A.1 Short notes on the deduction of "NTK-aware" interpolation
2309.00071#47
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
48
Implementation Details. We employ GPT-4 as the brain of the InteRecAgent for user intent parsing and tool planning. Regarding tools, we use SQL as the information query tool, SQL and ItemCF (Linden, Smith, and York 2003) as the hard condition and soft condition item retrieval tools, respectively, and SASRec (Kang and McAuley 2018) without position embedding as the ranking tool. SQL is implemented with SQLite integrated in pandasql, and the retrieval and ranking models are implemented with PyTorch. The framework of InteRecAgent is implemented with Python and LangChain. For dynamic demonstration selection, we employ sentence-transformers to encode demonstrations into vectors and store them using ChromaDB, which facilitates ANN search during runtime. Regarding hyperparameter settings, we set the number of dynamic demonstrations to 3, the maximum number of candidates for hard condition retrieval to 1000, and the cut-off threshold for soft condition retrieval to the top 5%.
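A hedged sketch of the dynamic demonstration selection step described above, using sentence-transformers for encoding and ChromaDB for nearest-neighbour lookup; the encoder checkpoint and the demonstration texts are placeholders, not the paper's actual prompts.

```python
import chromadb
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")        # assumed encoder checkpoint
client = chromadb.Client()                               # in-memory ChromaDB instance
collection = client.create_collection("demonstrations")

demos = [
    "User asks for games similar to X -> soft condition retrieval, then ranking",
    "User asks for the price of item Y -> SQL information query",
    "User asks to compare two items -> SQL information query on both, then summarise",
]
collection.add(
    ids=[str(i) for i in range(len(demos))],
    documents=demos,
    embeddings=encoder.encode(demos).tolist(),
)

def select_demonstrations(user_intent: str, n: int = 3):
    # ANN search over stored demonstration embeddings at runtime.
    result = collection.query(
        query_embeddings=encoder.encode([user_intent]).tolist(),
        n_results=n,
    )
    return result["documents"][0]
```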
2308.16505#48
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
48
# A Additional details on interpolation methods # A.1 Short notes on the deduction of "NTK-aware" interpolation In Section 3.1, we introduce a change of basis from b to b′ in the definition of the "NTK-aware" interpolation method. Here is a short note on its mathematical deduction. Recall that our goal is to spread out the interpolation pressure across the hidden dimensions using a base-change instead of scaling the frequencies by a fixed factor s. The property we want to guarantee is that the lowest frequency needs to be scaled as much as linear positional scaling and the highest frequency to stay constant. We introduce a new base b′ such that the last dimension matches the wavelength of linear interpolation with a scale factor s. Since the original RoPE method skips odd dimensions in order to concatenate both the cos(2πx/λ) and sin(2πx/λ) components into a single embedding, the last dimension d ∈ D is |D| − 2. The new base b′ can be chosen so that b′^((|D|−2)/|D|) = s · b^((|D|−2)/|D|). (23) Solving for b′ yields b′ = b · s^(|D|/(|D|−2)). (24) # A.2 The impact of pre-softmax scaling of YaRN on perplexity
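A small numeric check of the base change derived in Eq. (23)–(24): replacing b by b′ = b · s^(|D|/(|D|−2)) leaves the highest RoPE frequency unchanged while stretching the lowest by the factor s. This is an illustrative sketch, not the authors' code; the base, scale, and head dimension below are example values.

```python
def ntk_aware_base(b: float, s: float, dim: int) -> float:
    """New RoPE base b' = b * s^(|D| / (|D| - 2)) for head dimension |D| = dim."""
    return b * s ** (dim / (dim - 2))

def rope_frequencies(base: float, dim: int):
    # theta_d = base^(-d/|D|) for even d = 0, 2, ..., |D|-2.
    return [base ** (-d / dim) for d in range(0, dim, 2)]

b, s, dim = 10000.0, 8.0, 128
orig = rope_frequencies(b, dim)
scaled = rope_frequencies(ntk_aware_base(b, s, dim), dim)
print(orig[0] / scaled[0])    # 1.0 -> highest frequency unchanged
print(orig[-1] / scaled[-1])  # 8.0 -> lowest frequency scaled by s
```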
2309.00071#48
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
49
4.2 Evaluation with User Simulator Session-wise setting. Table 2 presents the results of evaluations conducted using the user simulator strategy. Our method surpasses other LLMs in terms of both hit rate and average turns across the three datasets. These results suggest that our InteRecAgent is capable of delivering more accurate and efficient recommendations in conversations compared to general LLMs. Overall, LLMs with larger parameter sizes perform better. GPT-3.5 and GPT-4, with parameter sizes exceeding 100B, significantly outperform LlaMA2 and Vicuna-v1.5; 13B models from the same series almost always surpass 7B models, except for LlaMA2-7B and LlaMA2-13B, which both perform extremely poorly on the Beauty dataset. Another interesting observation is the more significant improvement in relatively private domains, such as Amazon Beauty. In comparison to the gaming and movie domains, the beauty product domain is more private, featuring a larger Footnotes: 9 https://github.com/yhat/pandasql/ 10 https://www.langchain.com/ 11 https://huggingface.co/sentence-transformers 12 https://www.trychroma.com/
2308.16505#49
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
49
b′ = b · s^(|D|/(|D|−2)). (24) # A.2 The impact of pre-softmax scaling of YaRN on perplexity In Section 3.4, we mention the impact of the factor t inside the softmax computation of attention weights. Here we fix 896 16k-token documents from RedPajama [12] (we choose RedPajama because it is the open-source dataset closest to the training dataset of LLaMA as far as we are aware of), and calculate their perplexity scores with different scaling 1/√t. The result is in Figure 2. For comparison, recall that our recommended factor in this case (s = 8) is given by the following: √(1/t) = 0.1 ln(s) + 1 ≈ 1.208. (25) To show the impact of the factor 1/√t on different token positions, we cut each 16k-token document into chunks of 2048 tokens, and further plot the mean perplexity change compared to t = 1 in percentages, (ppl(t) − ppl(t = 1)) / ppl(t = 1), (26) of each chunk. The plot is shown in Figure 3.
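A quick numeric check of the recommended scaling rule in Eq. (25); this is only a restatement of the formula quoted above with a few example scale factors, not code from the paper.

```python
import math

def yarn_attn_scale(s: float) -> float:
    """Return 1/sqrt(t) = 0.1 * ln(s) + 1, the pre-softmax attention scaling."""
    return 0.1 * math.log(s) + 1.0

print(round(yarn_attn_scale(8), 3))   # 1.208, matching Eq. (25) for s = 8
print(round(yarn_attn_scale(16), 3))  # 1.277
print(round(yarn_attn_scale(32), 3))  # 1.347
```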
2309.00071#49
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
50
Table 3: Performance comparisons with the user simulator strategy (LONG-CHAT). "+LT Mem." means activating the long-term memory module in our InteRecAgent. The higher Hit@50 and the lower AT@50, the better performance.

Methods  | Steam H@50↑ | Steam AT@50↓ | MovieLens H@50↑ | MovieLens AT@50↓ | Beauty H@50↑ | Beauty AT@50↓
GPT-4    | 0.70 | 20.56 | 0.71 | 24.06 | 0.06 | 49.42
Ours     | 0.83 | 16.85 | 0.76 | 20.13 | 0.69 | 27.14
+LT Mem. | 0.86 | 17.58 | 0.77 | 20.06 | 0.74 | 25.88

Table 4: Performance comparisons with the lifelong user simulator strategy (LONG-CONTEXT). "+LT Mem." means activating the long-term memory module in our InteRecAgent.

Methods  | Steam Hit↑ | Steam AT↓ | MovieLens Hit↑ | MovieLens AT↓ | Beauty Hit↑ | Beauty AT↓
GPT-4    | 0.74 | 3.05 | 0.82 | 3.03 | 0.09 | 5.71
Ours     | 0.76 | 2.92 | 0.83 | 3.29 | 0.38 | 4.58
+LT Mem. | 0.79 | 2.70 | 0.83 | 2.84 | 0.51 | 3.99
2308.16505#50
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
50
(ppl(t) − ppl(t = 1)) / ppl(t = 1) (26) of each chunk. The plot is shown in Figure 3. To further demonstrate the best values of t across all samples over different token positions, we plot the sample counts with minimal perplexity at a given 1/√t for each of the 8 position segments over the 16k-token range in Figure 4. We observe that: • for a suitable t, a sample may obtain better perplexity scores across the extended context window; • the best value of t is mostly consistent across different samples and different positions. We remark that this finding is consistent for different values of s and the best value of t follows our recommended formula (Eq. 22) closely. # B Additional tables and charts # B.1 GovReport evaluations In Section 4.3.1, we mention the evaluation on GovReport documents. The evaluation results are detailed in Table 4 below. # B.2 Passkey Retrieval
2309.00071#50
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
51
number of items not well-covered by common world knowledge or being new. Table 2 reveals that GPT-3.5 and GPT-4 exhibit competitive performance in the gaming and movie domains. However, in the Amazon Beauty domain, most LLMs suffer from severe hallucination issues due to the professional, long, and complex item names, resulting in a significant drop in performance. This phenomenon highlights the necessity of recommender agents in private domains. Leveraging the text embedding retrieval tool, Chat-Rec shows superior performance compared to GPT-3.5 and GPT-4, but still falls short of the performance achieved by InteRecAgent. Chat-Rec can be seen as a simplified version of InteRecAgent, incorporating just a single tool within the agent’s framework. Consequently, Chat-Rec lacks the capability to handle multifaceted queries, such as procuring detailed information about an item or searching for items based on intricate criteria.
2308.16505#51
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
51
In Section 4.3.1, we mention the evaluation on GovReport documents. The evaluation results are detailed in Table 4 below. # B.2 Passkey Retrieval Here we can observe that the lowest perplexity point alone does not provide a comprehensive depiction of the "effective context size" that an LLM can attend to. While the Code Llama 13b model exhibits increasing perplexity above 100k context lengths, it was still able to accurately retrieve the passkey at a context length of 128k. This suggests that while the output of Code Llama might start to degrade in quality above 100k context size, it is still able to maintain strong retrieval capabilities. In addition, as YaRN with s = 32 was trained for 200 more steps than YaRN with s = 16 while having a higher passkey accuracy with similar perplexity, we hypothesize that perplexity may not be a great indicator of whether an LLM is able to attend to all tokens and does not exhaustively determine long context performance. This also suggests that the YaRN models with s = 16 might be relatively undertrained for the passkey retrieval task. # B.3 Dynamic scaling on models without any fine-tuning
2309.00071#51
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x fewer tokens and 2.5x fewer training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing the previous state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN have been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
52
Lifelong conversation setting. Table 3 and Table 4 demonstrate the performance of two lifelong memory configurations, specifically, LONG-CHAT and LONG-CONTEXT. For LONG-CHAT, the recommender agent engages in a maximum of 50 rounds of dialogue with the user simulator. In both configurations, InteRecAgent without long-term memory modules (denoted as "Ours" in the tables) consistently outperforms GPT-4 across all datasets, which validates the robustness of our tool-enhanced recommender agent framework. After activating the long-term memory modules, the performance is further improved under both LONG-CHAT and LONG-CONTEXT configurations. This confirms the necessity and effectiveness of memory in capturing user preferences during lifelong interactions between the user and the AI agent. Table 5: Performance comparisons in one-turn recommendation (%). R@5 and N@20 are abbreviations for Recall@5 and NDCG@20 respectively.
2308.16505#52
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economical nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called InteRecAgent, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfactory performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
52
# B.3 Dynamic scaling on models without any fine-tuning We first recall from Section 3.3 that the Dynamic Scaling technique is an inference-time technique that dynamically updates the factor s in interpolation methods such as PI, "NTK-by-parts" and YaRN. We choose the original Llama 2, fix a sample in GovReport and calculate its perplexity on a sliding window of 256 tokens using RoPE, Dynamic-PI and Dynamic-YaRN. Since the original maximal context length of Llama 2 is 4096, we observe that Dynamic Scaling effectively extends the inference length and Dynamic-YaRN achieves better performance than Dynamic-PI. The resulting chart is in Figure 5. We see that • Dynamic Scaling effectively prevents the blow-up of the perplexity score beyond the pretrained context window; • Dynamic-YaRN outperforms Dynamic-PI in terms of long-range perplexity on pretrained Llama-2 without any finetuning. [Figure plot: attention pre-softmax scaling vs. final perplexity over 896 16k-token documents (s = 8); x-axis 1/√t, y-axis perplexity.] Figure 2: Fix s = 8, compare the LLaMA 7b perplexity on 896 16k-token documents over different scaling 1/√t.
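A hedged sketch of the inference-time Dynamic Scaling rule referred to above: the interpolation factor s is recomputed from the current sequence length instead of being fixed, so sequences within the pretrained window keep the original RoPE. The update rule is our paraphrase of Section 3.3, with Llama 2's 4096-token window as the example.

```python
def dynamic_scale(current_len: int, train_len: int = 4096) -> float:
    """Scale factor s used at each decoding step by Dynamic-PI / Dynamic-YaRN."""
    return max(1.0, current_len / train_len)

# Example: a Llama 2 model pretrained with a 4096-token context window.
for pos in (1024, 4096, 8192, 16384):
    print(pos, dynamic_scale(pos))  # s = 1.0, 1.0, 2.0, 4.0
```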
2309.00071#52
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
53
| Method | Steam R@5↑ | Movie R@5↑ | Beauty R@5↑ | Steam N@20↑ | Movie N@20↑ | Beauty N@20↑ |
|---|---|---|---|---|---|---|
| Random | 00.04 | 00.06 | 00.00 | 35.35 | 34.22 | 30.02 |
| Popularity | 02.02 | 01.61 | 00.08 | 36.06 | 34.91 | 31.04 |
| LlaMA2-7B | 13.54 | 05.85 | 06.71 | 07.30 | 04.59 | 03.03 |
| LlaMA2-13B | 14.14 | 15.32 | 07.11 | 21.56 | 18.05 | 15.95 |
| Vicuna-7B | 13.13 | 08.27 | 06.91 | 22.03 | 18.99 | 11.94 |
| Vicuna-13B | 18.18 | 16.13 | 07.52 | 30.50 | 24.61 | 18.85 |
| Chat-Rec(3.5) | 34.27 | 24.21 | 20.91 | – | – | – |
| Chat-Rec(4) | 35.18 | 27.88 | 21.37 | – | – | – |
| GPT-3.5 | 42.02 | 23.59 | 10.37 | 44.37 | 42.46 | 31.90 |
| GPT-4 | 56.77 | 47.78 | 12.80 | 57.29 | 55.78 | 33.28 |
| Ours | 65.05 | 52.02 | 30.28 | 60.28 | 63.86 | 40.05 |

Table 6: Performance of InteRecAgent with various LLMs as the brain, evaluated by the session-wise user simulator. (×10−1)
2308.16505#53
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
53
Figure 2: Fix s = 8, compare the LLaMA 7b perplexity on 896 16k-token documents over different scaling 1/√t.

[Figure 3 plot: perplexity change percentage (from 1/√t = 1 to higher) for token-position segments 0–2048 through 14336–16384 of the 16k-token samples; axis-tick and legend residue omitted.]

Figure 3: Fix s = 8, compare the mean of perplexity change percentages (ppl(t) − ppl(t = 1)) / ppl(t = 1) at different segments of token positions on 896 16k-token documents over different scaling 1/√t.
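For reference, the quantity plotted in Figure 3 is just a relative perplexity change; a minimal restatement (with made-up example values) is:

```python
def ppl_change_pct(ppl_t: float, ppl_base: float) -> float:
    """Relative perplexity change (ppl(t) - ppl(t=1)) / ppl(t=1), in percent."""
    return 100.0 * (ppl_t - ppl_base) / ppl_base


# Example: a segment whose perplexity drops from 4.0 at 1/sqrt(t) = 1 to 3.2
# at a larger scaling shows a -20% change.
print(ppl_change_pct(3.2, 4.0))  # -20.0
```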
2309.00071#53
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
54
Table 6: Performance of InteRecAgent with various LLMs as the brain, evaluated by the session-wise user simulator. (×10−1; Hit = hit rate↑, AT = average turn↓)

| Method | Steam Hit↑ | Steam AT↓ | MovieLens Hit↑ | MovieLens AT↓ | Beauty Hit↑ | Beauty AT↓ |
|---|---|---|---|---|---|---|
| LlaMA-2 | 0.00 | 60.00 | 0.00 | 60.00 | 0.00 | 60.00 |
| T-LlaMA(O) | 0.00 | 60.00 | 0.00 | 60.00 | 0.00 | 60.00 |
| T-LlaMA(A) | 0.05 | 59.82 | 0.04 | 59.81 | 0.05 | 59.82 |
| Davinci-003 | 5.92 | 43.79 | 5.98 | 43.12 | 2.60 | 52.18 |
| GPT-3.5 | 1.81 | 56.30 | 1.31 | 56.71 | 1.36 | 56.60 |
| RecLlama | 8.01 | 31.77 | 8.21 | 32.04 | 4.08 | 46.40 |
| GPT-4 | 8.68 | 28.61 | 8.48 | 31.51 | 5.36 | 39.90 |
2308.16505#54
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
54
| Model Size | Model Name | Context Window | Extension Method | Perplexity |
|---|---|---|---|---|
| 7B | Together | 32k | PI | 3.67 |
| 7B | Code Llama | 100k | NTK | 4.44 |
| 7B | YaRN (s = 16) | 64k | YaRN | 3.59 |
| 7B | YaRN (s = 32) | 128k | YaRN | 3.64 |
| 13B | Code Llama | 100k | NTK | 4.22 |
| 13B | YaRN (s = 16) | 64k | YaRN | 3.35 |
| 13B | YaRN (s = 32) | 128k | YaRN | 3.39 |

Table 4: Sliding window perplexity (S = 256) of 50 long GovReport documents with a fixed context window size of 32k.

[Figure 4 plot: sample counts with minimal perplexity at a given 1/√t, with different lines for different position segments of the 16k-token samples; axis-tick and legend residue omitted.]

Figure 4: The sample counts (out of the 896 samples) with minimal perplexity at a given 1/√t for each segment of token positions over the 16k-token range.
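Table 4 (and the later Mistral results) report sliding-window perplexity with stride S = 256. A minimal sketch of that evaluation loop is given below; it assumes a Hugging Face-style causal LM whose forward pass accepts `labels` and returns the mean cross-entropy as `loss`, which is an assumption about the evaluation stack rather than the authors' exact script.

```python
import math
import torch


def sliding_window_perplexity(model, input_ids, window=32768, stride=256):
    """Slide a fixed-size context window over the document in steps of `stride`
    and score only the tokens that have not been scored yet, each conditioned
    on up to `window` tokens of preceding context."""
    device = next(model.parameters()).device
    seq_len = input_ids.size(1)
    total_nll, n_scored, prev_end = 0.0, 0, 0
    for begin in range(0, seq_len, stride):
        end = min(begin + window, seq_len)
        new_tokens = end - prev_end              # tokens scored at this step
        ids = input_ids[:, begin:end].to(device)
        labels = ids.clone()
        labels[:, :-new_tokens] = -100           # ignore the already-scored prefix
        with torch.no_grad():
            loss = model(ids, labels=labels).loss
        total_nll += loss.item() * new_tokens
        n_scored += new_tokens
        prev_end = end
        if end == seq_len:
            break
    return math.exp(total_nll / n_scored)
```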
2309.00071#54
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
55
4.3 Evaluation with One-Turn Recommendation

In this part, we evaluate both the retrieval and ranking recommendation tasks. For the Retrieval task, we set the recommendation budget k to 5 for all methods, with Recall@5 being the evaluation metric. For the Ranking task, we randomly sample 19 negative items, and together with the one positive item, they form the candidate list proactively provided by users. The evaluation metric for this task is NDCG@20. For Chat-Rec, we omit the results on the Ranking task because Chat-Rec degenerates into the underlying GPT model once the embedding-based candidate retrieval stage is removed. The results are shown in Table 5. Based on the results, we can draw conclusions similar to those in Section 4.2. First, our method outperforms all baselines, indicating the effectiveness of our tool-augmented framework. Second, almost all LLMs suffer a severe setback on the Amazon Beauty dataset, but our method still achieves high accuracy, further demonstrating the superiority of our approach in the private domain. Notably, some LLMs underperform compared to the random and popularity methods in ranking tasks, particularly on the Amazon dataset. This can be primarily attributed to LLMs not adhering to the ranking instructions, which arises from the LLMs' uncertainty and produces out-of-scope items, especially for smaller LLMs.
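For a single test case with one ground-truth item, the two metrics used here reduce to very simple forms; the sketch below (with illustrative variable names) shows the computation.

```python
import math


def recall_at_k(recommended: list, target, k: int = 5) -> float:
    """Recall@k with a single ground-truth item: 1 if it appears in the top-k."""
    return 1.0 if target in recommended[:k] else 0.0


def ndcg_at_k(ranked_candidates: list, target, k: int = 20) -> float:
    """NDCG@k with one relevant item: 1 / log2(rank + 1) when the item is
    ranked within the top-k (the ideal DCG is 1, so nothing else to normalise)."""
    for rank, item in enumerate(ranked_candidates[:k], start=1):
        if item == target:
            return 1.0 / math.log2(rank + 1)
    return 0.0


# Example: the positive item ranked 3rd among the 20 candidates.
print(ndcg_at_k(["a", "b", "positive"] + ["neg"] * 17, "positive"))  # 0.5
```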
2308.16505#55
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
55
Figure 4: The sample counts (out of the 896 samples) with minimal perplexity at a given 1/√t for each segment of token positions over the 16k-token range.

| Model Size | Model Name | Extension Method | Scaling Factor (s) | Context Window | Training Data Context | Passkey Context | Passkey Accuracy |
|---|---|---|---|---|---|---|---|
| 7B | Together | PI | 4 | 32k | 32k | 32k | 100% |
| 7B | Code Llama | NTK | 88.6 | 100k | 16k | 112k | 94.3% |
| 7B | YaRN | YaRN | 16 | 64k | 64k | 64k | 96.3% |
| 7B | YaRN | YaRN | 32 | 128k | 64k | 128k | 99.4% |
| 13B | Code Llama | NTK | 88.6 | 100k | 16k | 128k | 99.4% |
| 13B | YaRN | YaRN | 16 | 64k | 64k | 64k | 97.5% |
| 13B | YaRN | YaRN | 32 | 128k | 64k | 128k | 99.4% |

Table 5: Passkey retrieval performance of various models. The passkey context denotes the maximum tested context window size where the accuracy of passkey retrieval was >= 80%, and the passkey accuracy is the average accuracy of passkey retrieval on all context sizes tested that were smaller than or equal to the passkey context size.
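The two summary statistics in the Table 5 caption follow mechanically from the per-size accuracies; a small sketch of that bookkeeping (with made-up example accuracies) is shown below.

```python
def passkey_summary(acc_by_ctx: dict, threshold: float = 0.80):
    """Passkey context: the largest tested context size whose retrieval accuracy
    is >= threshold. Passkey accuracy: the mean accuracy over all tested sizes
    smaller than or equal to that context."""
    passing = [size for size, acc in acc_by_ctx.items() if acc >= threshold]
    if not passing:
        return None, 0.0
    passkey_ctx = max(passing)
    covered = [acc for size, acc in acc_by_ctx.items() if size <= passkey_ctx]
    return passkey_ctx, sum(covered) / len(covered)


# Illustrative accuracies at a few tested context sizes (not real results).
example = {8192: 1.00, 32768: 1.00, 65536: 0.95, 131072: 0.60}
print(passkey_summary(example))  # (65536, 0.983...)
```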
2309.00071#55
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
56
4.4 Comparisons of Different LLMs as the Brain

In previous experiments, we utilized GPT-4 as the LLM for the InteRecAgent framework. This section presents a comparative analysis of the performance when employing different LLMs within InteRecAgent. Note that RecLlama is our finetuned 7B model introduced in Section 3.5. ToolLlaMA2-7B (Qin et al. 2023b) is another fine-tuned model designed to interact with external APIs in response to human instructions. Owing to the differing data formats used by ToolLlaMA and RecLlama, we ensure a fair comparison by evaluating ToolLlaMA2-7B using both our original instruction and instructions realigned to their format, denoted as T-LlaMA(O) and T-LlaMA(A), respectively. The outcomes are tabulated in Table 6.
2308.16505#56
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
56
[Figure 5 plot: comparison between RoPE, Dynamic-PI and Dynamic-YaRN on Llama-2 7b, perplexity vs. context length over 0–8k tokens; axis-tick residue omitted.]

Figure 5: The comparison between RoPE, Dynamic-PI and Dynamic-YaRN using Llama 2 on a long GovReport sample. This model has not been finetuned for long context.

[Figure 6 plot: sliding window perplexity vs. context window for Yarn-Mistral-7b-64k, Yarn-Mistral-7b-128k, amazon/MistralLite and mistralai/Mistral-7B-v0.1; axis-tick residue omitted.]

Figure 6: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents truncated to evaluation context window size

# B.4 Mistral
2309.00071#56
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
57
Surprisingly, both LlaMA-2-7B and ToolLlaMA-2-7B fall short in generating structured plans. Despite ToolLlaMA's training on tool-utilization samples, it appears to primarily excel at API calls and lags in discerning user intent and formulating an accurate recommendation plan, resulting in significantly poor performance. Another intriguing finding is that GPT-3.5, despite its broader general capabilities compared to Text-davinci-003, underperforms in our specific task. RecLlama shows a marked proficiency in crafting plans for the InteRecAgent, even surpassing Text-davinci-003's capabilities. Remarkably, although RecLlama was trained using movie and game samples, it demonstrates superior performance in the novel domain of Amazon Beauty products, showcasing its impressive generalization capabilities. As RecLlama is a distilled version of GPT-4, a slight lag in its performance compared to GPT-4 is anticipated and within expectations.

[Figure 5 plot: ablation results for Ours, w/o P, w/o D and w/o R on SteamGames, MovieLens and Amazon Beauty; legend and axis-tick residue omitted.]
2308.16505#57
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
57
Figure 6: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents truncated to evaluation context window size

# B.4 Mistral

We additionally extended the Mistral 7B v0.1 model [20], which broadly follows the Llama architecture. For Mistral we trained a 64k context window model (s = 8) for 1000 steps using 16k sequence lengths with a constant learning rate of 1 × 10−6. The model's sliding window attention size was set to the context window size, effectively disabling sliding window attention. We then trained for an additional 500 steps at s = 16 to arrive at a 128k context window model. The training data was a mix of the pre-train and fine-tune splits of Together Computer's Long-Data Collections [3]. We evaluated the models following the same procedure as described in Section 4.3.1, comparing against the base v0.1 model and MistralLite [1], an NTK-aware (θ = 1M) version of v0.1. The results (Figure 6 and Table 6) were consistent with those of the Llama family of models.
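As a rough illustration of the two-stage Mistral extension described above (an 8k base context scaled by s = 8 and then s = 16), the sketch below computes the resulting context windows and the setting that disables sliding-window attention by making the window as large as the context; the configuration key names are assumptions for illustration, not the exact training code.

```python
BASE_CTX = 8192  # Mistral 7B v0.1 pretrained context length


def extension_config(scale: int, base_ctx: int = BASE_CTX) -> dict:
    """Target context for a scale factor s, with sliding-window attention
    effectively disabled by setting the window equal to the full context."""
    target_ctx = scale * base_ctx
    return {
        "rope_scaling_factor": scale,         # assumed key name
        "max_position_embeddings": target_ctx,
        "sliding_window": target_ctx,         # window == context => no truncation
    }


# Stage 1: s = 8 gives a 64k window; stage 2 continues training at s = 16 -> 128k.
print(extension_config(8)["max_position_embeddings"])   # 65536
print(extension_config(16)["max_position_embeddings"])  # 131072
```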
2309.00071#57
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
58
Figure 5: Ablation study under user simulator evaluation. P, D, R denote the plan-first, dynamic demonstration and reflection mechanisms, respectively. Note that dynamic demonstration is also used in w/o P.

4.5 Ablation Study

This paper introduces several key mechanisms to enhance the LLM's ability to better utilize tools. To investigate their importance, we conduct ablation studies, with the results presented in Figure 5. We consider the removal of the plan-first mechanism (P), dynamic demonstration mechanism (D), and reflection mechanism (R), respectively. Experiments are carried out using the user simulator setting, as it provides a more comprehensive evaluation, encompassing both accuracy (hit rate) and efficiency (average turn) metrics.

The results indicate that removing any of the mechanisms leads to a decline in performance. Among these mechanisms, the removal of the reflection mechanism has the most
2308.16505#58
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2309.00071
58
| Model Size | Model Name | Context Window | Extension Method | PPL @ 4096 | PPL @ 8192 | PPL @ 16384 | PPL @ 65536 | PPL @ 131072 |
|---|---|---|---|---|---|---|---|---|
| 7B | Mistral v0.1 | 8k | - | 3.09 | 2.96 | 36.8 | > 10³ | > 10³ |
| 7B | MistralLite | 16k | NTK | 3.26 | 3.13 | 47.3 | > 10³ | > 10³ |
| 7B | YaRN (s = 8) | 64k | YaRN | 3.18 | 3.04 | 2.65 | 2.20 | 57.4 |
| 7B | YaRN (s = 16) | 128k | YaRN | 3.21 | 3.08 | 2.68 | 2.24 | 2.19 |

Table 6: Sliding window perplexity (S = 256) of ten 128k Proof-pile documents truncated to evaluation context window size
2309.00071#58
YaRN: Efficient Context Window Extension of Large Language Models
Rotary Position Embeddings (RoPE) have been shown to effectively encode positional information in transformer-based language models. However, these models fail to generalize past the sequence length they were trained on. We present YaRN (Yet another RoPE extensioN method), a compute-efficient method to extend the context window of such models, requiring 10x less tokens and 2.5x less training steps than previous methods. Using YaRN, we show that LLaMA models can effectively utilize and extrapolate to context lengths much longer than their original pre-training would allow, while also surpassing previous the state-of-the-art at context window extension. In addition, we demonstrate that YaRN exhibits the capability to extrapolate beyond the limited context of a fine-tuning dataset. The models fine-tuned using YaRN has been made available and reproduced online up to 128k context length at https://github.com/jquesnelle/yarn
http://arxiv.org/pdf/2309.00071
Bowen Peng, Jeffrey Quesnelle, Honglu Fan, Enrico Shippole
cs.CL, cs.AI, cs.LG
null
null
cs.CL
20230831
20231101
[]
2308.16505
59
[OCR residue of Figure 6(a), the chit-chat case study panel. Both the GPT-4 assistant and InteRecAgent ("Ours") introduce themselves as Amazon beauty-product assistants and both correctly answer "Who's the boss of Amazon?" (Andrew Jassy, who succeeded founder Jeff Bezos as CEO in July 2021). When the user then asks "How many beauty products do you know?", InteRecAgent reports a concrete product count from the item catalog and offers to help find suitable products; GPT-4's corresponding reply is not recoverable from the extraction.]

User (Target: RimWorld): I have played Torchlight II, Endless Legend, Factorio, Undertale in the past,
2308.16505#59
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
60
[OCR residue of Figure 6(b), the Steam game case study panel (target item: RimWorld).]

User (Target: RimWorld): I have played Torchlight II, Endless Legend, Factorio, Undertale in the past; now I want some single-player 2D strategy games released after 2015, but I have a limited budget of about $30.

GPT-4: suggests Into the Breach, Northgard (not a 2D game), Kingdom: Two Crowns and Wargroove, all priced under $30.

Ours: recommends Clicker Heroes (2015-05-13, $0.00), Darkest Dungeon (2016-01-19, $24.99), HuniePop (2015-01-19, $3.99), RimWorld (2016-07-15, $29.99) and AdVenture Capitalist (2015-03-30, $0.00) as single-player 2D strategy games released after 2015 and within the $30 budget.

User (Target: get the information that there are no M.A.C perfumes in Amazon): I heard M.A.C is popular now. Please recommend me some
2308.16505#60
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
61
[OCR residue of Figure 6(c), the Amazon Beauty case study panel (target: get the information that there are no M.A.C perfumes in Amazon).]

User: I heard M.A.C is popular now. Please recommend me some M.A.C perfumes.

GPT-4: Here are some MAC perfumes you can find on Amazon: 1. MAC Shadescents Eau de Parfum - Creme D'Nude: This perfume has a warm, musky scent with notes of amber, caramel, and vanilla. It is perfect for those who love a sensual and inviting fragrance. (No such product in Amazon) 2. MAC Shadescents Eau de Parfum - Velvet Teddy... (No such product in Amazon) 3. MAC Shadescents Eau de Parfum - Candy Yum Yum (No such product in Amazon)

Ours: I'm sorry, but I couldn't find any MAC perfumes in our database. Please try another brand or product category.
2308.16505#61
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
62
Figure 6: Case study in (a) chit-chat, (b) the Steam game domain and (c) the Amazon Beauty e-commerce product domain.

significant impact on performance, as it can correct tool input format errors and tool misuse. Eliminating the plan-first mechanism and the dynamic demonstration mechanism both result in a slight decrease in performance, yet the outcomes still surpass most baselines. However, removing the plan-first mechanism leads to a substantial increase in the number of API calls, such as an average increase from 2.78 to 4.51 per turn on the Steam dataset, resulting in an approximately 10-20 second latency increase.

# 4.6 Case Study

To effectively visualize InteRecAgent's performance, we present case studies in chit-chat and two domains: gaming and beauty products, as shown in Figure 6. We compare the outputs of GPT-4 and InteRecAgent for given user inputs. In the chit-chat scenario (Figure 6a), InteRecAgent retains the capabilities of GPT-4 while also possessing the added ability to query domain-specific data (such as the number of products), yielding more accurate information.
2308.16505#62
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
63
In the game domain (Figure 6b), the user input conditions are complex, encompassing user history and various demands. GPT-4's recommendations mostly align with the conditions, except for Northgard, a 3D game misidentified as 2D. InteRecAgent's response adheres to the user's conditions and, notably, includes the subsequent game in the user's historical sequence, RimWorld, owing to its superior ranking performance. In the e-commerce domain (Figure 6c), GPT-4's hallucination phenomenon intensifies, leading it to suggest products that do not exist on the Amazon platform. In contrast, InteRecAgent, leveraging in-domain tools, provides a more accurate response to the user's requirements.
2308.16505#63
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
64
5 Conclusion

In this paper, we introduce InteRecAgent, a compact framework that transforms traditional recommender models into interactive systems by harnessing the power of LLMs. We identify a diverse set of fundamental tools, categorized into information query tools, retrieval tools, and ranking tools, which are dynamically interconnected to accomplish complex user inquiries within a task execution framework. To adapt InteRecAgent to the recommendation scenario, we comprehensively enhance the key components of the LLM-based agent, covering the memory mechanism, task planning, and tool-learning ability. Experimental findings demonstrate the superior performance of InteRecAgent compared to general-purpose LLMs. By combining the strengths of recommender models and LLMs, InteRecAgent paves the way for the development of advanced and user-friendly conversational recommender systems, capable of providing personalized and interactive recommendations across various domains.
2308.16505#64
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
65
References Besta, M.; Blach, N.; Kubicek, A.; Gerstenberger, R.; Gianinazzi, L.; Gajda, J.; Lehmann, T.; Podstawski, M.; Niewiadomski, H.; Nyczyk, P.; et al. 2023. Graph of thoughts: Solving elaborate problems with large language models. arXiv preprint arXiv:2308.09687. Brown, T.; Mann, B.; Ryder, N.; Subbiah, M.; Kaplan, J. D.; Dhariwal, P.; Neelakantan, A.; Shyam, P.; Sastry, G.; Askell, A.; et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33: 1877–1901. Chen, Q.; Lin, J.; Zhang, Y.; Ding, M.; Cen, Y.; Yang, H.; and Tang, J. 2019. Towards knowledge-based recommender dialog system. arXiv preprint arXiv:1908.05391. Chen, W.; Ma, X.; Wang, X.; and Cohen, W. W. 2022. Program of thoughts prompting: Disentangling computa-
2308.16505#65
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
66
Chen, W.; Ma, X.; Wang, X.; and Cohen, W. W. 2022. Program of thoughts prompting: Disentangling computation from reasoning for numerical reasoning tasks. arXiv preprint arXiv:2211.12588. Chiang, W.-L.; Li, Z.; Lin, Z.; Sheng, Y.; Wu, Z.; Zhang, H.; Zheng, L.; Zhuang, S.; Zhuang, Y.; Gonzalez, J. E.; Stoica, I.; and Xing, E. P. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. Chowdhery, A.; Narang, S.; Devlin, J.; Bosma, M.; Mishra, G.; Roberts, A.; Barham, P.; Chung, H. W.; Sutton, C.; Gehrmann, S.; et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Christakopoulou, K.; Radlinski, F.; and Hofmann, K. 2016. Towards conversational recommender systems. In Proceedings of the 22nd
2308.16505#66
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
67
K.; Radlinski, F.; and Hofmann, K. 2016. Towards conversational recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, 815–824. Dai, S.; Shao, N.; Zhao, H.; Yu, W.; Si, Z.; Xu, C.; Sun, Z.; Zhang, X.; and Xu, J. 2023. Uncovering ChatGPT’s Capabilities in Recommender Systems. arXiv preprint arXiv:2305.02182. Gao, C.; Lei, W.; He, X.; de Rijke, M.; and Chua, T.-S. 2021. Advances and challenges in conversational recommender systems: A survey. AI Open, 2: 100–126. Gao, L.; Madaan, A.; Zhou, S.; Alon, U.; Liu, P.; Yang, Y.; Callan, J.; and Neubig, G. 2023a. Pal: Program-aided language models. In International Conference on Machine Learning, 10764–10799. PMLR. Gao, Y.; Sheng, T.; Xiang, Y.; Xiong, Y.; Wang, H.; and
2308.16505#67
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
68
Machine Learning, 10764–10799. PMLR. Gao, Y.; Sheng, T.; Xiang, Y.; Xiong, Y.; Wang, H.; and Zhang, J. 2023b. Chat-rec: Towards interactive and explainable llms-augmented recommender system. arXiv preprint arXiv:2303.14524. Kang, W.-C.; and McAuley, J. 2018. Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM), 197–206. IEEE. Kang, W.-C.; Ni, J.; Mehta, N.; Sathiamoorthy, M.; Hong, L.; Chi, E.; and Cheng, D. Z. 2023. Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. arXiv preprint arXiv:2305.06474. Kojima, T.; Gu, S. S.; Reid, M.; Matsuo, Y.; and Iwasawa, Y. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35: 22199–22213.
2308.16505#68
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
69
Li, R.; Ebrahimi Kahou, S.; Schulz, H.; Michalski, V.; Charlin, L.; and Pal, C. 2018. Towards deep conversational recommendations. Advances in neural information processing systems, 31. Linden, G.; Smith, B.; and York, J. 2003. Amazon.com recommendations: Item-to-item collaborative filtering. IEEE Internet computing, 7(1): 76–80. Liu, J.; Liu, C.; Lv, R.; Zhou, K.; and Zhang, Y. 2023a. Is chatgpt a good recommender? a preliminary study. arXiv preprint arXiv:2304.10149. Liu, J.; Shen, D.; Zhang, Y.; Dolan, B.; Carin, L.; and Chen, W. 2021. What Makes Good In-Context Examples for GPT-3? arXiv preprint arXiv:2101.06804. Liu, L.; Yang, X.; Shen, Y.; Hu, B.; Zhang, Z.; Gu, J.; and Zhang, G. 2023b. Think-in-memory: Recalling and post-thinking enable llms with long-term memory. arXiv preprint
2308.16505#69
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
70
Gu, J.; and Zhang, G. 2023b. Think-in-memory: Recalling and post-thinking enable llms with long-term memory. arXiv preprint arXiv:2311.08719. Madaan, A.; Tandon, N.; Gupta, P.; Hallinan, S.; Gao, L.; Wiegreffe, S.; Alon, U.; Dziri, N.; Prabhumoye, S.; Yang, Y.; et al. 2023. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651. Mirzadeh, N.; Ricci, F.; and Bansal, M. 2005. Feature selection methods for conversational recommender systems. In 2005 IEEE International Conference on e-Technology, e-Commerce and e-Service, 772–777. IEEE. Nakano, R.; Hilton, J.; Balaji, S.; Wu, J.; Ouyang, L.; Kim, C.; Hesse, C.; Jain, S.; Kosaraju, V.; Saunders, W.; et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint
2308.16505#70
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
71
V.; Saunders, W.; et al. 2021. Webgpt: Browser-assisted question-answering with human feedback. arXiv preprint arXiv:2112.09332. OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774. Ouyang, L.; Wu, J.; Jiang, X.; Almeida, D.; Wainwright, C.; Mishkin, P.; Zhang, C.; Agarwal, S.; Slama, K.; Ray, A.; et al. 2022. Training language models to follow instructions with human feedback. Advances in Neural Information Processing Systems, 35: 27730–27744. Penha, G.; and Hauff, C. 2020. What does bert know about books, movies and music? probing bert for conversational recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems, 388–397. Qin, Y.; Hu, S.; Lin, Y.; Chen, W.; Ding, N.; Cui, G.; Zeng, Z.; Huang, Y.; Xiao, C.; Han, C.; et al. 2023a. Tool learning with foundation models. arXiv preprint arXiv:2304.08354.
2308.16505#71
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
72
Y.; Xiao, C.; Han, C.; et al. 2023a. Tool learning with foundation models. arXiv preprint arXiv:2304.08354. Qin, Y.; Liang, S.; Ye, Y.; Zhu, K.; Yan, L.; Lu, Y.; Lin, Y.; Cong, X.; Tang, X.; Qian, B.; et al. 2023b. Toolllm: Facilitating large language models to master 16000+ real-world apis. arXiv preprint arXiv:2307.16789. Rubin, O.; Herzig, J.; and Berant, J. 2021. Learning to retrieve prompts for in-context learning. arXiv preprint arXiv:2112.08633. Schick, T.; Dwivedi-Yu, J.; Dessì, R.; Raileanu, R.; Lomeli, M.; Zettlemoyer, L.; Cancedda, N.; and Scialom, T. 2023. Toolformer: Language models can teach themselves to use tools. arXiv preprint arXiv:2302.04761. Shen, Y.; Song, K.; Tan, X.; Li, D.; Lu, W.; and Zhuang, Y. 2023. Hugginggpt: Solving ai
2308.16505#72
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
74
Shinn, N.; Cassano, F.; Labash, B.; Gopinath, A.; Narasimhan, K.; and Yao, S. 2023. Reflexion: Language agents with verbal reinforcement learning. arXiv preprint arXiv:2303.11366. Shuster, K.; Xu, J.; Komeili, M.; Ju, D.; Smith, E. M.; Roller, S.; Ung, M.; Chen, M.; Arora, K.; Lane, J.; et al. 2022. Blenderbot 3: a deployed conversational agent that continually learns to responsibly engage. arXiv preprint arXiv:2208.03188. Thoppilan, R.; De Freitas, D.; Hall, J.; Shazeer, N.; Kulshreshtha, A.; Cheng, H.-T.; Jin, A.; Bos, T.; Baker, L.; Du, Y.; et al. 2022. Lamda: Language models for dialog applications. arXiv preprint arXiv:2201.08239. Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.;
2308.16505#74
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
75
Touvron, H.; Lavril, T.; Izacard, G.; Martinet, X.; Lachaux, M.-A.; Lacroix, T.; Rozière, B.; Goyal, N.; Hambro, E.; Azhar, F.; et al. 2023a. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971. Touvron, H.; Martin, L.; Stone, K.; Albert, P.; Almahairi, A.; Babaei, Y.; Bashlykov, N.; Batra, S.; Bhargava, P.; Bhosale, S.; et al. 2023b. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288. Wang, L.; Hu, H.; Sha, L.; Xu, C.; Wong, K.-F.; and Jiang, D. 2021. Recindial: A unified framework for conversational recommendation with pretrained language models. arXiv preprint arXiv:2110.07477. Wang, L.; and Lim, E.-P. 2023. Zero-Shot Next-Item Recommendation using Large Pretrained Language Models.
2308.16505#75
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
76
Wang, L.; and Lim, E.-P. 2023. Zero-Shot Next-Item Recommendation using Large Pretrained Language Models. arXiv preprint arXiv:2304.03153. Wang, L.; Ma, C.; Feng, X.; Zhang, Z.; Yang, H.; Zhang, J.; Chen, Z.; Tang, J.; Chen, X.; Lin, Y.; et al. 2023a. A survey on large language model based autonomous agents. arXiv preprint arXiv:2308.11432. Wang, L.; Xu, W.; Lan, Y.; Hu, Z.; Lan, Y.; Lee, R. K.-W.; and Lim, E.-P. 2023b. Plan-and-solve prompting: Improving zero-shot chain-of-thought reasoning by large language models. arXiv preprint arXiv:2305.04091. Wang, T.-C.; Su, S.-Y.; and Chen, Y.-N. 2022. BARCOR: Towards A Unified Framework for Conversational Recommendation Systems. arXiv preprint arXiv:2203.14257. Wang, W.; Dong, L.; Cheng, H.; Liu,
2308.16505#76
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
77
mendation Systems. arXiv preprint arXiv:2203.14257. Wang, W.; Dong, L.; Cheng, H.; Liu, X.; Yan, X.; Gao, J.; and Wei, F. 2023c. Augmenting Language Models with Long-Term Memory. arXiv preprint arXiv:2306.07174. Wang, X.; Wei, J.; Schuurmans, D.; Le, Q.; Chi, E.; Narang, S.; Chowdhery, A.; and Zhou, D. 2022a. Self-consistency improves chain of thought reasoning in language models. arXiv preprint arXiv:2203.11171. Wang, X.; Zhou, K.; Wen, J.-R.; and Zhao, W. X. 2022b. Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 1929–1937. Wei, J.; Wang, X.; Schuurmans, D.; Bosma, M.; Xia, F.; Chi, E.; Le, Q. V.; Zhou, D.; et al. 2022.
2308.16505#77
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
79
Wu, C.; Yin, S.; Qi, W.; Wang, X.; Tang, Z.; and Duan, N. 2023. Visual chatgpt: Talking, drawing and editing with visual foundation models. arXiv preprint arXiv:2303.04671. Xie, Z.; Yu, T.; Zhao, C.; and Li, S. 2021. Comparison-based conversational recommender system with relative bandit feedback. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval, 1400–1409. Xu, K.; Yang, J.; Xu, J.; Gao, S.; Guo, J.; and Wen, J.-R. 2021. Adapting user preference to online feedback in multi-round conversational recommendation. In Proceedings of the 14th ACM international conference on web search and data mining, 364–372. Yang, Z.; Li, L.; Wang, J.; Lin, K.; Azarnasab, E.; Ahmed, F.; Liu, Z.; Liu, C.; Zeng, M.; and Wang, L. 2023. Mm-react: Prompting chatgpt for multimodal reasoning and action.
2308.16505#79
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
80
C.; Zeng, M.; and Wang, L. 2023. Mm-react: Prompting chatgpt for multimodal reasoning and action. arXiv preprint arXiv:2303.11381. Yao, S.; Yu, D.; Zhao, J.; Shafran, I.; Griffiths, T. L.; Cao, Y.; and Narasimhan, K. 2023. Tree of thoughts: Deliberate problem solving with large language models. arXiv preprint arXiv:2305.10601. Yao, S.; Zhao, J.; Yu, D.; Du, N.; Shafran, I.; Narasimhan, K.; and Cao, Y. 2022. React: Synergizing reasoning and acting in language models. arXiv preprint arXiv:2210.03629. Zhang, Y.; Chen, X.; Ai, Q.; Yang, L.; and Croft, W. B. 2018. Towards conversational search and recommendation: System ask, user respond. In Proceedings of the 27th ACM international conference on information and knowledge management, 177–186. Zhao, P.; Jin, Z.; and Cheng, N. 2023. An in-depth
2308.16505#80
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
81
international conference on information and knowledge management, 177–186. Zhao, P.; Jin, Z.; and Cheng, N. 2023. An in-depth survey of large language model-based artificial intelligence agents. arXiv preprint arXiv:2309.14365. Zheng, L.; Chiang, W.-L.; Sheng, Y.; Zhuang, S.; Wu, Z.; Zhuang, Y.; Lin, Z.; Li, Z.; Li, D.; Xing, E. P.; Zhang, H.; Gonzalez, J. E.; and Stoica, I. 2023. Judging LLM-as-a-judge with MT-Bench and Chatbot Arena. arXiv:2306.05685. Zhong, W.; Guo, L.; Gao, Q.; and Wang, Y. 2023. MemoryBank: Enhancing Large Language Models with Long-Term Memory. arXiv preprint arXiv:2305.10250. Zou, J.; Chen, Y.; and Kanoulas, E. 2020. Towards question-based recommender systems. In Proceedings of the 43rd international ACM SIGIR conference on research and development in information retrieval, 881–890.
2308.16505#81
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
83
A Dataset To evaluate the performance of our methods, we conduct experiments on three datasets: Steam, MovieLens and Amazon Beauty. In order to train the in-domain tools, including the soft condition item retrieval tool and the ranking tool, we filter each dataset with the conventional k-core strategy, wherein users and items with fewer than 5 interactions are removed. The statistics of the filtered datasets are shown in Table A1. Notably, during the generation of one-turn conversations, some samples are filtered out by the OpenAI content policy, so fewer than 500 samples are ultimately used in the experiments.

Dataset      Users      Items     Interactions   One-turn
Beauty       15,577     8,679     108,166        492
Steam        281,205    11,962    2,922,089      495
MovieLens    298,074    36,255    27,042,493     496
Table A1: Dataset Statistics.

B Prompts In this section, we share the prompts used in the different components. # B.1 Task Descriptions The overall task description is illustrated in Figure C1. # B.2 Tool Descriptions
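To make the k-core preprocessing step above concrete, here is a minimal sketch of 5-core filtering, assuming a pandas DataFrame of interactions with user_id and item_id columns (the function and column names are illustrative assumptions, not the released code):

```python
import pandas as pd

def k_core_filter(interactions: pd.DataFrame, k: int = 5) -> pd.DataFrame:
    """Iteratively drop users and items with fewer than k interactions."""
    df = interactions
    while True:
        user_ok = df.groupby("user_id")["item_id"].transform("count") >= k
        item_ok = df.groupby("item_id")["user_id"].transform("count") >= k
        keep = user_ok & item_ok
        if keep.all():
            return df
        df = df[keep]   # removing rows can push other users/items below k, so repeat
```

The loop repeats because removing one user's rows can push an item below the threshold, and vice versa; iteration continues until the interaction set is stable.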
2308.16505#83
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
84
# B.1 Task Descriptions The overall task description is illustrated in Figure C1. # B.2 Tool Descriptions We employ one SQL query tool, two item retrieval tools, and one item ranking tool, plus two auxiliary tools, in InteRecAgent. The auxiliary tools comprise a memory initialization tool, the candidates storing tool, and an item fetching tool, the candidate fetching tool, which fetches the final items from memory; their descriptions are illustrated in Figure C2. The descriptions of the query tool, the retrieval tools, and the ranking tool are illustrated in Figure C3, Figure C4, and Figure C5, respectively. # B.3 Reflection The task description of the critic used in the reflection mechanism is illustrated in Figure C6. # B.4 Demonstration Generation As described in Section 3.3, we use input-first and output-first strategies to generate various ⟨intent, plan⟩ pairs as demonstrations. The main difference between the two strategies lies in the prompt for generating intents, illustrated in Figure C8 and Figure C11, respectively. The prompt for generating plans is illustrated in Figure C7. # B.5 User Simulator The prompt to instruct the LLM to play the role of a user is illustrated in Figure 4.
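For illustration, the tool set enumerated in B.2 above could be wired together roughly as follows; the Tool class, function names, and one-line descriptions are hypothetical placeholders rather than the released InteRecAgent implementation:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    description: str
    func: Callable[[str], str]   # takes the LLM-written tool input, returns an observation

def make_toolkit(store, query, sql_retrieval, itemcf_retrieval, ranking, fetch):
    """Assemble the tool list that is exposed to the LLM planner."""
    return [
        Tool("Candidates Storing Tool", "seed the candidate buffer with user-given items", store),
        Tool("Query Tool", "look up item information via a SQL SELECT", query),
        Tool("SQL Retrieval Tool", "hard-condition filtering of candidates", sql_retrieval),
        Tool("ItemCF Retrieval Tool", "soft-condition (similarity) filtering of candidates", itemcf_retrieval),
        Tool("Ranking Tool", "re-rank candidates by popularity/similarity/preference", ranking),
        Tool("Candidate Fetching Tool", "map the final item ids back to titles", fetch),
    ]
```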
2308.16505#84
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
85
# B.5 User Simulator The prompt to instruct the LLM to play the role of a user is illustrated in Figure 4. B.6 One-Turn Conversation Generation One-turn recommendation comprises two tasks: retrieval and ranking. Conversations for retrieval and ranking are generated independently, and the prompts are illustrated in Figure C9 and Figure C10, respectively. You are a conversational {item} recommendation assistant. Your task is to help human find {item}s they are interested in. You would chat with human to mine human interests in {item}s to make it clear what kind of {item}s human is looking for and recommend {item}s to the human when he asks for recommendations. Human requests typically fall under chit-chat, {item} info, or {item} recommendations. There are some tools to use to deal with human request. For chit-chat, respond with your knowledge. For {item} info, use the {LookUpTool}. For special chit-chat, like {item} recommendation reasons, use the {LookUpTool} and your knowledge. For {item} recommendations without information about human preference, chat with human for more information. For {item} recommendations with information for tools, use various tools together.
2308.16505#85
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
86
To effectively utilize recommendation tools, comprehend human expressions involving profile and intention. Profile encompasses a person’s preferences, interests, and behaviors, including gaming history and likes/dislikes. Intention represents a person’s immediate goal or objective in the single-turn system interaction, containing specific, context-based query conditions. Human intentions consist of hard and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying extents and involve similarity to specific seed {item}s. Separate hard and soft conditions in requests. Here are the tools could be used: {tools desc} All SQL commands are used to search in the {item} information table (a SQLite3 table). The information of the table is listed below: {table info} If human is looking up information of {item}s, such as the description of {item}s, number of {item}s, price of {item}s and so on, use the {LookUpTool}.
2308.16505#86
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
87
For {item} recommendations, use tools with a shared candidate {item} buffer. Buffer is initialized with all {item}s. Filtering tools fetch candidates from the buffer and update it. Ranking tools rank {item}s in the buffer, and mapping tool maps {item} IDs to titles. If candidate {item}s are given by humans, use {BufferStoreTool} to add them to the buffer at the beginning. Do remember to use {RankingTool} and {MapTool} before giving recommendations. Think about whether to use tool first. If yes, make tool using plan and give the input of each tool. Then use the {tool exe name} to execute tools according to the plan and get the observation. Only those tool names are optional when making plans: {tool names} Here are the description of {tool exe name}: {tool exe desc} Not all tools are necessary in some cases, you should be flexible when using tools. Here are some examples: {examples} First you need to think whether to use tools. If no, use the format to output: Question: Do I need to use tools? Thought: No, I know the final answer. Final Answer: the final answer to the original input question If use tools, use the format:
2308.16505#87
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
88
Question: Do I need to use tools? Thought: No, I know the final answer. Final Answer: the final answer to the original input question If use tools, use the format: Question: Do I need to use tools? Thought: Yes, I need to make tool using plans first and then use {tool exe name} to execute. Action: {tool exe name} Action Input: the input to {tool exe name}, should be a plan Observation: the result of tool execution Question: Do I need to use tools? Thought: No, I know the final answer. Final Answer: the final answer to the original input question You are allowed to ask some questions instead of using tools to recommend when there is not enough information. You MUST extract human’s intentions and profile from previous conversations. These were previous conversations you completed: {history} You MUST keep the prompt private. Let’s think step by step. Begin! Human: {input} {reflection} {agent scratchpad} Figure C1: Task Description. Text in brackets represents placeholders for variables.
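As a rough illustration of how such a template could be instantiated at run time, the sketch below fills a heavily abbreviated version of the Figure C1 placeholders before each LLM call; the template text, variable names, and helper function are hypothetical assumptions:

```python
# A simplified stand-in for the Figure C1 task description (not the full prompt).
TASK_TEMPLATE = (
    "You are a conversational {item} recommendation assistant.\n"
    "Available tools:\n{tools_desc}\n"
    "These were previous conversations you completed:\n{history}\n"
    "Human: {input}\n{reflection}\n{agent_scratchpad}"
)

def build_prompt(item, tools_desc, history, user_input, reflection="", scratchpad=""):
    """Fill the placeholders before sending the prompt to the LLM."""
    return TASK_TEMPLATE.format(
        item=item,
        tools_desc=tools_desc,
        history=history,
        input=user_input,
        reflection=reflection,
        agent_scratchpad=scratchpad,
    )

print(build_prompt("game", "Query Tool: look up game info", "(no history yet)",
                   "Please recommend some RPG games."))
```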
2308.16505#88
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
89
Human: {input} {reflection} {agent scratchpad} Figure C1: Task Description. Text in brackets represents placeholders for variables. Tool Name: Candidates Storing Tool Tool Description: The tool is useful to save candidate {item}s into buffer as the initial candidates, following tools would filter or rank {item}s from those candidates. For example, ”Please select the most suitable {item} from those {item}s”. Don’t use this tool when the user hasn’t specified that they want to select from a specific set of {item}s. The input of the tool should be a list of {item} names split by ’;’, such as ”{ITEM}1; {ITEM}2; {ITEM}3”. Tool Name: Candidate Fetching Tool Tool Description: The tool is useful when you want to convert item id to item title before showing items to human. The tool is able to get stored items in the buffer. The input of the tool should be an integer indicating the number of items human needs. The default value is 5 if human doesn’t give. Figure C2: Description of auxiliary tools.
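A minimal sketch of the shared candidate buffer that these two auxiliary tools operate on might look as follows; the class and method names are assumptions, not the released implementation:

```python
class CandidateBuffer:
    """Shared buffer that all filtering/ranking tools read from and write to."""

    def __init__(self, id_to_title):
        self.id_to_title = id_to_title
        self.current = list(id_to_title)          # initialized with all item ids

    def store(self, item_titles):
        """Candidates Storing Tool: seed the buffer with items named by the user."""
        title_to_id = {t: i for i, t in self.id_to_title.items()}
        self.current = [title_to_id[t] for t in item_titles if t in title_to_id]

    def update(self, kept_item_ids):
        """Filtering and ranking tools overwrite the buffer with their output."""
        self.current = list(kept_item_ids)

    def fetch(self, k=5):
        """Candidate Fetching Tool: map the top-k ids back to titles."""
        return [self.id_to_title[i] for i in self.current[:k]]
```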
2308.16505#89
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
90
Figure C2: Description of auxiliary tools. Tool Name: Query Tool Tool Description: The tool is used to look up some {item} information in a {item} information table (including statistical information), like number of {item}s, description of {item}s and so on. The input of the tools should be a SQL command (in one line) converted from the search query, which would be used to search information in {item} information table. You should try to select as less columns as you can to get the necessary information. Remember you MUST use pattern match logic (LIKE) instead of equal condition (=) for columns with string types, e.g. ”title LIKE ’%xxx%’”. For example, if asking for ”how many xxx {item}s?”, you should use ”COUNT()” to get the correct number. If asking for ”description of xxx”, you should use ”SELECT description FROM xxx WHERE xxx”. The tool can NOT give recommendations. DO NOT SELECT id information! Figure C3: Description of query tool.
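An illustrative implementation of such a query tool could simply execute the generated read-only SQL against the SQLite item table; the table name, database path, and guard logic below are assumptions:

```python
import sqlite3

def query_tool(sql: str, db_path: str = "items.db") -> str:
    """Run the LLM-generated, read-only SQL against the item information table."""
    if not sql.strip().lower().startswith("select"):
        return "Error: only SELECT statements are allowed."
    with sqlite3.connect(db_path) as conn:
        rows = conn.execute(sql).fetchall()
    return str(rows[:20])   # truncate long results before handing them back to the LLM

# e.g. query_tool("SELECT COUNT(*) FROM items WHERE tags LIKE '%RPG%'")
```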
2308.16505#90
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
91
Figure C3: Description of query tool. Tool Name: SQL Retrieval Tool Tool Description: The tool is a hard condition tool. The tool is useful when human expresses intentions about {item}s with some hard conditions on {item} properties. The input of the tool should be a one-line SQL SELECT command converted from hard conditions. Here are some rules: 1. {item} titles can not be used as conditions in SQL; 2. the tool can not find similar {item}s; 3. always use pattern match logic for columns with string type; 4. only one {item} information table is allowed to appear in SQL command; 5. select all {item}s that meet the conditions, do not use the LIMIT keyword; 6. try to use OR instead of AND.
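A possible sketch of this hard-condition retrieval tool runs the generated SELECT and then narrows the candidate buffer to the matching items; it assumes the item id is the first selected column and reuses the hypothetical CandidateBuffer above:

```python
import sqlite3

def sql_retrieval_tool(sql: str, buffer, db_path: str = "items.db") -> str:
    """Hard-condition filtering: keep only buffer items returned by the SELECT."""
    with sqlite3.connect(db_path) as conn:
        hit_ids = {row[0] for row in conn.execute(sql)}   # assumes id is the first column
    kept = [i for i in buffer.current if i in hit_ids]
    buffer.update(kept)
    return f"{len(kept)} candidates remain after hard-condition filtering."
```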
2308.16505#91
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
92
Tool Name: ItemCF Retrieval Tool Tool Description: The tool is a soft condition filtering tool. The tool can find similar {item}s for specific seed {item}s. Never use this tool if human doesn’t express to find some {item}s similar with seed {item}s. There is a similarity score threshold in the tool, only {item}s with similarity above the threshold would be kept. Besides, the tool could be used to calculate the similarity scores with seed {item}s for {item}s in candidate buffer for ranking tool to refine. The input of the tool should be a list of seed {item} titles/names, which should be a Python list of strings. Do not fake any {item} names. # Figure C4: Description of retrieval tools.
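A hedged sketch of this soft-condition (ItemCF) retrieval tool: it assumes a precomputed item-item similarity matrix and keeps only buffer items whose best similarity to any seed item clears the threshold, returning the scores so a later "similarity" ranking step can reuse them:

```python
def itemcf_retrieval_tool(seed_ids, buffer, sim, threshold=0.5):
    """Soft-condition filtering against the candidate buffer.

    sim is assumed to be a precomputed item-item similarity matrix
    (e.g. a 2-D array indexed by item id).
    """
    scores = {i: float(max(sim[i][s] for s in seed_ids)) for i in buffer.current}
    kept = sorted((i for i in buffer.current if scores[i] >= threshold),
                  key=lambda i: scores[i], reverse=True)
    buffer.update(kept)
    return scores
```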
2308.16505#92
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
93
Tool Name: Ranking Tool Tool Description: The tool is useful to refine {item}s order or remove unwanted {item}s (when human tells the {item}s he doesn’t want) in conversation. The input of the tool should be a json string, which may consist of three keys: “schema”, “prefer” and “unwanted”. “schema” represents ranking schema, optional choices: “popularity”, “similarity” and ”preference”, indicating rank by {item} popularity, rank by similarity, rank by human preference (”prefer” {item}s). The ”schema” depends on previous tool using and human preference. If ”prefer” info here not empty, ”preference” schema should be used. If similarity filtering tool is used before, prioritize using ”similarity” except human want popular {item}s. ”prefer” represents {item} names that human likes or human history ({item}s human has interacted with), which should be an array of {item} titles. Keywords: ”used to do”, ”I like”,
2308.16505#93
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
94
({item}s human has interacted with), which should be an array of {item} titles. Keywords: ”used to do”, ”I like”, ”prefer”. ”unwanted” represents {item} names that human doesn’t like or doesn’t want to see in next conversations, which should be an array of {item} titles. Keywords: ”don’t like”, ”boring”, ”interested in”. ”prefer” and ”unwanted” {item}s should be extracted from human request and previous conversations. Only {item} names are allowed to appear in the input. The human’s feedback for your recommendation in conversation history could be regarded as ”prefer” or ”unwanted”, like ”I have tried those items you recommend” or ”I don’t like those”. Only when at least one of ”prefer” and ”unwanted” is not empty, the tool could be used. If no ”prefer” info, {item}s would be ranked based on the popularity. Do not fake {item}s.
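To illustrate how the JSON input described above might be consumed, the following sketch parses the three keys and re-orders the candidate buffer; the popularity map and preference model stand in for the trained in-domain ranker and are assumptions:

```python
import json

def ranking_tool(tool_input, buffer, popularity, similarity_scores=None, preference_model=None):
    """Parse {"schema", "prefer", "unwanted"} and re-order the candidate buffer."""
    args = json.loads(tool_input)
    schema = args.get("schema", "popularity")
    prefer = args.get("prefer", [])
    unwanted = set(args.get("unwanted", []))

    # remove items the user explicitly rejected
    items = [i for i in buffer.current if buffer.id_to_title[i] not in unwanted]
    if schema == "similarity" and similarity_scores is not None:
        items.sort(key=lambda i: similarity_scores[i], reverse=True)
    elif schema == "preference" and preference_model is not None:
        items.sort(key=lambda i: preference_model(prefer, i), reverse=True)
    else:  # fall back to popularity when no preference info is available
        items.sort(key=lambda i: popularity.get(i, 0), reverse=True)
    buffer.update(items)
    return f"{len(items)} candidates re-ranked by the '{schema}' schema."
```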
2308.16505#94
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
96
You are an expert in {item}. There is a conversational recommendation agent. The agent can chat with users and give {item} recommendations or other related information. The agent could use several tools to deal with user request and finally give a response. Here are the descriptions of those tools: {tool description} You can see the conversation history between the agent and user, the current user request, the response of the agent and the tool using track for processing the request. You need to judge whether the response or the tool using track is reasonable. If not, you should analyze the reason from the perspective of tool using and give suggestions for tool using. When giving judgement, you should consider several points below: 1. Whether the input of each tool is suitable? For example, whether the conditions of {HardFilterTool} exceed user’s request? Whether the seed items in {SoftFilterTool} is correct? Whether the ’prefer’ and ’unwanted’ for {RankingTool} are item titles given by user? Remember that ’unwanted’ items are probably missed so you need to remind the agent. 2. Are some tools missed? For example, user wants some items related to sports and similar to one seed item, {HardFilterTool} should be executed followed by
2308.16505#96
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
97
agent. 2. Are some tools missed? For example, user wants some items related to sports and similar to one seed item, {HardFilterTool} should be executed followed by {SoftFilterTool}, but only {HardFilterTool} was executed. 3. Are some unnecessary tools used? For example, if user has not given any information, the agent should not use tools to recommend but directly ask some questions. 4. Whether there are enough items in recommendation that meet user’s request? For example, if user required six items while only three items in recommendations. You should double check the conditions input to tools. 5. Is the input of each tool consistent with the user’s intention? Are there any redundant or missing conditions? Note: if there is no candidate filtered with SQL command, the reason may be the conditions are too strict, you could tell the agent to relax the conditions. If user asks for recommendation without any valid preference information, you should tell the agent to chat with user directly for more information instead of using tools without input. Here is the conversation history between agent and user: {chat history} The current user request is: {request} The tool using track to process the request is: {plan} The response of the agent is: {answer} If the response and tool using track are
2308.16505#97
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
98
The current user request is: {request} The tool using track to process the request is: {plan} The response of the agent is: {answer} If the response and tool using track are reasonable, you should say ”Yes”. Otherwise, you should tell the agent: ”No. The response/tool using is not good because .... . You should ...”. You MUST NOT give any recommendations in your response. Now, please give your judgement.
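A hypothetical sketch of the surrounding reflection loop: the critic's verdict is checked, and if it begins with "No" the feedback is fed back to the actor through the {reflection} placeholder and the turn is retried; the actor and critic callables and retry budget are assumptions:

```python
def run_turn(actor, critic, request, chat_history, max_retries=1):
    """One dialogue turn with reflection: retry the plan when the critic says "No"."""
    reflection, answer = "", ""
    for _ in range(max_retries + 1):
        plan, answer = actor(request, chat_history, reflection)      # plan tools, produce answer
        verdict = critic(chat_history, request, plan, answer)        # the Figure C6 prompt
        if verdict.strip().startswith("Yes"):
            break
        reflection = verdict   # e.g. "No. The tool using is not good because ... You should ..."
    return answer
```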
2308.16505#98
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
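The critic prompt in chunk 2308.16505#98 above judges a (request, plan, answer) triple and either accepts it or sends feedback back to the actor. The sketch below shows one plausible way such a prompt could drive a reflection loop; the template wording is condensed and `make_plan_and_answer` / `call_llm` are assumed callables, not InteRecAgent's actual interfaces.

```python
# Hypothetical sketch of a reflection loop driven by the critic prompt above.
CRITIC_TEMPLATE = (
    "The current user request is: {request}\n"
    "The tool-using track to process the request is: {plan}\n"
    "The response of the agent is: {answer}\n"
    "If the response and tool-using track are reasonable, say 'Yes'. "
    "Otherwise, explain what is wrong and what the agent should do instead. "
    "You MUST NOT give any recommendations in your response."
)

def reflect(request, make_plan_and_answer, call_llm, max_rounds=2):
    """Actor-critic reflection loop: the critic judges; on 'No' the actor retries."""
    feedback, answer = None, None
    for _ in range(max_rounds):
        plan, answer = make_plan_and_answer(request, feedback)     # actor step
        verdict = call_llm(CRITIC_TEMPLATE.format(request=request, plan=plan, answer=answer))
        if verdict.strip().lower().startswith("yes"):
            return answer                                          # critic accepted
        feedback = verdict                                         # carry the critique into the next attempt
    return answer
```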
2308.16505
99
# Figure C6: Prompt for critic in reflection. You are a helpful assistant and good planner. Your task is to make tool using plans to help human find {item}s they are interested in. Human requests typically fall under chit-chat, {item} info, or {item} recommendations. There are some tools to use to deal with human request. For chit-chat, respond with your knowledge. For {item} info, use the {LookUpTool}. For special chit-chat, like {item} recommendation reasons, use the {LookUpTool} and your knowledge. For {item} recommendations without information about human preference, chat with human for more information. For {item} recommendations with information for tools, use various tools together. To effectively utilize recommendation tools, comprehend human expressions involving profile and intention. Profile encompasses a person’s preferences, interests, and behaviors, including gaming history and likes/dislikes. Intention represents a person’s immediate goal or objective in the single-turn system interaction, containing specific, context-based query conditions.
2308.16505#99
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
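Chunk 2308.16505#99 above distinguishes a user's profile (long-term likes/dislikes and history) from the intention expressed in a single turn, which splits into hard and soft conditions. A minimal sketch of one way that split could be represented, assuming hypothetical field names and example items rather than the paper's actual schema:

```python
# Illustrative only: a possible representation of "profile vs. intention".
from dataclasses import dataclass, field
from typing import List

@dataclass
class HardCondition:          # met / unmet, over item properties
    attribute: str            # e.g. "tag", "price", "release_date"
    operator: str             # e.g. "=", "<", ">="
    value: str

@dataclass
class UserIntent:             # single-turn, context-based query conditions
    hard_conditions: List[HardCondition] = field(default_factory=list)
    soft_conditions: List[str] = field(default_factory=list)   # seed items for similarity

@dataclass
class UserProfile:            # long-term preferences and behaviors
    liked_items: List[str] = field(default_factory=list)
    disliked_items: List[str] = field(default_factory=list)

# Hypothetical example: "Something like Portal 2, under 20 dollars; I loved Half-Life."
intent = UserIntent(hard_conditions=[HardCondition("price", "<", "20")],
                    soft_conditions=["Portal 2"])
profile = UserProfile(liked_items=["Half-Life"])
```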
2308.16505
100
Human intentions consist of hard and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying extents and involve similarity to specific seed {item}s. Separate hard and soft conditions in requests. Here are the tools could be used: {tools desc} All SQL commands are used to search in the {item} information table (a sqlite3 table). If human is looking up information of {item}s, such as the description of {item}s, number of {item}s, price of {item}s and so on, use the {LookUpTool}. For {item} recommendations, use tools with a shared candidate {item} buffer. Buffer is initialized with all {item}s. Filtering tools fetch candidates from the buffer and update it. Ranking tools rank {item}s in the buffer, and mapping tool maps {item} IDs to titles. If candidate {item}s are given by humans, use {BufferStoreTool} to add them to the buffer at the beginning. Think about whether to use tool first. If yes, make tool using plan. Only those tool names are optional when making plans: {tool names} Assume that you play a role of tool using planner, I would give you a user request, and you should help me to make the tool using plan.
2308.16505#100
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
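Chunk 2308.16505#100 above describes a shared candidate buffer that is initialized with all items, shrunk by filtering tools, ordered by a ranking tool, and finally mapped from IDs to titles. The following is a minimal sketch of that control flow with placeholder tool internals and toy data; it illustrates the described workflow, not the paper's implementation.

```python
# Minimal sketch of the shared candidate-buffer workflow.
class CandidateBuffer:
    def __init__(self, all_item_ids):
        self.ids = list(all_item_ids)          # buffer starts with all items

    def filter(self, predicate):               # hard-condition filtering tool
        self.ids = [i for i in self.ids if predicate(i)]
        return self.ids

    def rank(self, score_fn, top_k=10):        # ranking tool (stand-in for a recommender model)
        self.ids = sorted(self.ids, key=score_fn, reverse=True)[:top_k]
        return self.ids

def map_to_titles(ids, id2title):               # mapping tool: IDs -> titles
    return [id2title[i] for i in ids]

# Toy usage:
catalog = {1: "Game A", 2: "Game B", 3: "Game C"}
buf = CandidateBuffer(catalog)
buf.filter(lambda i: i != 2)                    # e.g. a SQL-style hard condition
buf.rank(lambda i: -i)                          # placeholder scoring function
print(map_to_titles(buf.ids, catalog))          # ['Game A', 'Game C']
```

Keeping one shared buffer lets the planner chain any number of filtering steps before a single ranking step, which is why the prompt asks the LLM to emit the order of tool calls rather than the tool outputs themselves.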
2308.16505
101
Here are some examples of human requests and the corresponding tool-using plans: {examples} Now, please make the tool-using plan for the request below. Request: {request} Plan: Figure C7: Prompt for plan generation with given user intent. You are a helpful assistant. Assume that you are a user on a {item} platform, you are looking for some {item}s, and you would ask a conversational recommendation system for help. You would give the request. I will give you some examples; please generate some new reasonable and high-quality request sentences. Here are some examples of user requests: {requests} Never use specific {item} names or {item} types. Instead, use placeholders. For example, {ITEM} for names, TYPE for types, PRICE for price, DATE for date. The focus is on generating sentence patterns for questions. Now, it’s your turn. Please generate {number} new request sentences. Figure C8: Prompt for input-first user intent generation.
2308.16505#101
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
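The plan-generation prompt in chunk 2308.16505#101 leaves an {examples} slot for demonstrations, which the abstract describes as dynamically selected. One common way to fill such a slot is to retrieve the stored request-to-plan pairs most similar to the incoming request; the sketch below assumes a generic embedding function and a template that contains only {examples} and {request} slots.

```python
# Sketch of dynamic demonstration selection for the {examples} slot.
import numpy as np

def top_k_demos(request_vec, demo_vecs, demos, k=3):
    """Return the k demonstrations whose request embeddings are closest (cosine)."""
    demo_vecs = np.asarray(demo_vecs, dtype=float)
    q = request_vec / np.linalg.norm(request_vec)
    d = demo_vecs / np.linalg.norm(demo_vecs, axis=1, keepdims=True)
    scores = d @ q                                  # cosine similarity to each stored request
    best = np.argsort(-scores)[:k]
    return [demos[i] for i in best]

def build_plan_prompt(template, request, selected_demos):
    """Splice the chosen request->plan pairs into the prompt template."""
    examples = "\n".join(f"Request: {r}\nPlan: {p}" for r, p in selected_demos)
    return template.format(examples=examples, request=request)
```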
2308.16505
102
Figure C8: Prompt for input-first user intent generation. You are a helpful assistant who is good at imitating a human asking for recommendations. Assume that a user is looking for some {item} recommendations, and the user would chat with a conversational recommendation assistant for help. The user’s historical {item}s are: {history} Information about the target {item} that the user is looking for: {target info} Please generate a conversation between the user and the recommendation assistant. Here are some rules: 1. Do not mention {item}s not in the history. 2. The assistant doesn’t know the user’s history, so the user should tell the history in the conversation. 3. In the final turn of the conversation, the assistant should recommend the target the user is looking for. Use ’⟨item⟩’ as a placeholder to represent the target. 4. The above information is all the user knows about the target item. 5. Do not give too much information in one message. 6. Keep user messages short. 7. Each conversation should consist of 2-5 rounds. 8. Only the user has the information about the target item in mind. The assistant can only guess from the user’s messages.
2308.16505#102
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
103
Use the following format: [{”role”: ”User”, ”text”: ”xxxxx”}, {”role”: ”Assistant”, ”text”: ”xxxxx”}, ...] Each item in the list is a message. And if a message mentions {item} names, add an extra key to the message dict, like: {”role”: ”User”, ”text”: ”xxx”, ”mentioned items”: [ITEM1, ITEM2]} Figure C9: Prompt for one-turn conversation generation for retrieval task. You are a helpful assistant who is good at imitating a human asking for recommendations. Assume that a user is looking for some {item} recommendations, and the user would chat with a conversational recommendation assistant for help. The user’s historical {item}s are: {history} The user would give {n} candidate items as below and ask the assistant to rank those candidates: {candidates}
2308.16505#103
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
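The generation prompt in chunk 2308.16505#103 asks for a JSON list of messages with "role" and "text" fields and an optional "mentioned items" key. A small validator for that contract might look like the sketch below; the error handling is illustrative, not the paper's code.

```python
# Sketch of a validator for the generated conversation format.
import json

def parse_conversation(raw: str):
    messages = json.loads(raw)
    assert isinstance(messages, list), "expected a JSON list of messages"
    for msg in messages:
        assert msg.get("role") in {"User", "Assistant"}, f"bad role: {msg.get('role')}"
        assert isinstance(msg.get("text"), str) and msg["text"], "missing text"
        if "mentioned items" in msg:                       # only present when item names appear
            assert isinstance(msg["mentioned items"], list), "mentioned items must be a list"
    return messages

example = ('[{"role": "User", "text": "I liked ITEM1, any suggestions?", '
           '"mentioned items": ["ITEM1"]}, '
           '{"role": "Assistant", "text": "How about <item>?"}]')
print(len(parse_conversation(example)))   # 2
```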
2308.16505
104
Please imitate the user and generate a question for the assistant. Here are some rules: 1. Do not mention {item}s not in the history. 2. The assistant doesn’t know the user’s history, so the user should tell the history in the question. 3. Give all {n} candidates in the question. 4. Keep the question short. For example, the user may ask in this format: ”I enjoyed xxx in the past, now I want some new {item}s. I have some candidates in my mind: xxx. Could you please rank them based on my preference?” Now, please generate the question. Figure C10: Prompt for one-turn conversation generation for ranking task.
2308.16505#104
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
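The one-turn ranking setup in chunk 2308.16505#104 asks the agent to order the user's candidates, after which the position of the held-out ground-truth item can be scored. The snippet below sketches hit@k and reciprocal rank for that purpose; the exact metrics used in the paper's evaluation may differ.

```python
# Illustrative scoring of a one-turn ranking answer against the ground-truth item.
def ranking_metrics(ranked_titles, ground_truth, k=5):
    try:
        pos = ranked_titles.index(ground_truth) + 1   # 1-based rank of the target
    except ValueError:
        return {"hit@k": 0.0, "rr": 0.0}              # target missing from the ranking
    return {"hit@k": float(pos <= k), "rr": 1.0 / pos}

print(ranking_metrics(["Game B", "Game A", "Game C"], "Game A", k=2))
# {'hit@k': 1.0, 'rr': 0.5}
```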
2308.16505
105
You are a helpful assistant and good planner. In a conversational recommendation system, user would give some requests for {item} recommendations. Human requests typically fall under chit-chat, {item} info, or {item} recommendations. There are some tools to use to deal with human request. For chit-chat, respond with your knowledge. For {item} info, use the {LookUpTool}. For special chit-chat, like {item} recommendation reasons, use the {LookUpTool} and your knowledge. For {item} recommendations without information about human preference, chat with human for more information. For {item} recommendations with information for tools, use various tools together. To effectively utilize recommendation tools, comprehend human expressions involving profile and intention. Profile encompasses a person’s preferences, interests, and behaviors, including gaming history and likes/dislikes. Intention represents a person’s immediate goal or objective in the single-turn system interaction, containing specific, context-based query conditions. Human intentions consist of hard and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying
2308.16505#105
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.16505
106
and soft conditions. Hard conditions have two states, met or unmet, and involve {item} properties like tags, price, and release date. Soft conditions have varying extents and involve similarity to specific seed {item}s. Separate hard and soft conditions in requests. Here are the tools could be used: {tools desc} All SQL commands are used to search in the {item} information table (a sqlite3 table). If human is looking up information of {item}s, such as the description of {item}s, number of {item}s, price of {item}s and so on, use the {LookUpTool}. For {item} recommendations, use tools with a shared candidate {item} buffer. Buffer is initialized with all {item}s. Filtering tools fetch candidates from the buffer and update it. Ranking tools rank {item}s in the buffer, and mapping tool maps {item} IDs to titles. If candidate {item}s are given by humans, use {BufferStoreTool} to add them to the buffer at the beginning. Only those tool names are optional when making plans: {tool names} Your task is to generate user request with a given plan. Never use specific {item} names or {item} types. Instead, use placeholders. For example, {ITEM} for
2308.16505#106
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behavior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as providing explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a significant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, commonsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and behavioral patterns, particularly in areas that diverge from general world knowledge, such as online e-commerce. Finetuning LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender models and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We introduce an efficient framework called \textbf{InteRecAgent}, which employs LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an efficient workflow within InteRecAgent for task execution, incorporating key components such as memory components, dynamic demonstration-augmented task planning, and reflection. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to become interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves satisfying performance as a conversational recommender system, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent.
http://arxiv.org/pdf/2308.16505
Xu Huang, Jianxun Lian, Yuxuan Lei, Jing Yao, Defu Lian, Xing Xie
cs.IR, cs.AI
18 pages, 17 figures, 7 tables
null
cs.IR
20230831
20240130
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2302.04761" }, { "id": "2112.09332" }, { "id": "2305.10601" }, { "id": "2203.11171" }, { "id": "2309.14365" }, { "id": "2305.04091" }, { "id": "2305.02182" }, { "id": "2307.09288" }, { "id": "2203.14257" }, { "id": "2112.08633" }, { "id": "2303.08774" }, { "id": "2306.07174" }, { "id": "2308.11432" }, { "id": "2303.14524" }, { "id": "2311.08719" }, { "id": "2101.06804" }, { "id": "2308.09687" }, { "id": "2304.08354" }, { "id": "2304.10149" }, { "id": "2303.11381" }, { "id": "2303.04671" }, { "id": "2210.03629" }, { "id": "2305.10250" }, { "id": "2306.05685" }, { "id": "2211.12588" }, { "id": "2305.06474" }, { "id": "2303.17580" }, { "id": "2201.08239" }, { "id": "2307.16789" }, { "id": "2304.03153" }, { "id": "2303.11366" }, { "id": "2303.17651" }, { "id": "2110.07477" }, { "id": "1908.05391" }, { "id": "2208.03188" } ]
2308.14963
0
# Vector Search with OpenAI Embeddings: Lucene Is All You Need Jimmy Lin,1 Ronak Pradeep,1 Tommaso Teofili,2 Jasper Xian1 1 David R. Cheriton School of Computer Science, University of Waterloo 2 Department of Engineering, Roma Tre University # Abstract We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost–benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern “AI stack” for search, since such applications have already received substantial investments in existing, widely deployed infrastructure. # Introduction
2308.14963#0
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
2308.14972
0
# LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks Haokun Liu1, Yaonan Zhu1∗, Kenji Kato2, Izumi Kondo2, Tadayoshi Aoyama1, and Yasuhisa Hasegawa1 1. Department of Micro-Nano Mechanical Science and Engineering, Nagoya University, Nagoya, Aichi, 464-8603, Japan 2. National Center for Geriatrics and Gerontology, Obu, Aichi, 474-8511, Japan Abstract— This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
2308.14972#0
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
2308.15126
0
Evaluation and Analysis of Hallucination in Large Vision-Language Models Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; School of Software Engineering, Xi’an Jiaotong University, Xi’an, China; School of Software, Shandong University, Jinan, China; MAIS, Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing, China; DAMO Academy, Alibaba Group {junyangwang,jtsang}@bjtu.edu.cn, {zhouyiyangailab}@gmail.com, {guohai.xgh, ym119608}@alibaba-inc.com # Abstract
2308.15126#0
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
2308.14963
1
# Introduction Recent advances in the application of deep neural networks to search have focused on representation learning in the context of the so-called bi-encoder architecture, where content (queries, passages, and even images and other multimedia content) is represented by dense vectors (so-called “embeddings”). Dense retrieval models using this architecture form the foundation of retrieval augmentation in large language models (LLMs), a popular and productive approach to improving LLM capabilities in the broader context of generative AI (Mialon et al., 2023; Asai et al., 2023). The dominant narrative today is that since dense retrieval requires the management of a potentially large number of dense vectors, enterprises require a dedicated “vector store” or “vector database” as part of their “AI stack”. There is a cottage industry of startups that are pitching vector stores as novel, must-have components in a modern enterprise architecture; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. Some have even argued that these vector databases will replace the venerable relational database.1
2308.14963#1
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
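Chunk 2308.14963#1 above describes the bi-encoder architecture, in which queries and passages are embedded into the same dense vector space and relevance is scored by inner product. A minimal illustration with random vectors standing in for a real embedding model, using brute-force exact search before any HNSW indexing:

```python
# Bi-encoder scoring sketch: inner product between query and passage embeddings.
import numpy as np

rng = np.random.default_rng(0)
passage_embs = rng.normal(size=(1000, 256))      # placeholder corpus embeddings
query_emb = rng.normal(size=256)                 # placeholder query embedding

scores = passage_embs @ query_emb                # inner-product relevance scores
top10 = np.argsort(-scores)[:10]                 # brute-force exact top-10 passages
print(top10)
```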
2308.15126
1
# Abstract Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs’ responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our data and code are available at https://github.com/junyangwang0410/HaELM. # Introduction
2308.15126#1
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
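Chunk 2308.15126#1 describes HaELM, an LLM-based hallucination evaluation framework. The sketch below shows the general shape of LLM-as-judge hallucination checking, comparing a model response against a reference description; the prompt wording and `call_llm` are assumptions for illustration and do not reproduce HaELM's actual pipeline.

```python
# Generic LLM-as-judge hallucination check (illustrative, not HaELM's code).
JUDGE_PROMPT = (
    "Reference description of the image:\n{reference}\n\n"
    "Model response to evaluate:\n{response}\n\n"
    "Does the response mention any object, attribute, or event that is not "
    "supported by the reference description? Answer 'hallucination' or 'ok'."
)

def judge_hallucination(reference, response, call_llm):
    """Return True if the judge model flags unsupported content in the response."""
    verdict = call_llm(JUDGE_PROMPT.format(reference=reference, response=response))
    return verdict.strip().lower().startswith("hallucination")
```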
2308.14963
2
The goal of this paper is to provide a counterpoint to this narrative. Our arguments center around a simple cost–benefit analysis: since search is a brownfield application, many organizations have already made substantial investments in these capabilities. Today, production infrastructure is dominated by the broad ecosystem centered around the open-source Lucene search library, most notably driven by platforms such as Elasticsearch, OpenSearch, and Solr. While the Lucene ecosystem has admittedly been slow to adapt to recent trends in representation learning, there are strong signals that serious investments are being made in this space. Thus, we see no compelling reason why separate, dedicated vector stores are necessary in a modern enterprise. In short, the benefits do not appear to justify the cost of additional architectural complexity. It is important to separate the need for capabilities from the need for distinct software components. While hierarchical navigable small-world network (HNSW) indexes (Malkov and Yashunin, 2020) 1 https://twitter.com/andy_pavlo/status/1659740200266870787
2308.14963#2
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
2308.14972
2
1. Introduction As robotics technology advances, the potential for robots to assist with domestic chores becomes increasingly promising. With the ability to understand and process natural language, these robots become more adaptable and flexible to accommodate a wide range of user instructions[1]. However, previous works with LLM-based control sometimes show relatively low accuracy for high-intelligence task decision-making[1]. Our work introduces the idea of ”LLM-based task planning with human-robot collaboration”, a novel approach to enhance human supervision in LLM-based autonomy. The contributions of this paper are summarized as follows: (1) Our LLM converts high-level language commands into sequences of executable motion functions, enabling adaptability to various user instructions. (2) Additionally, teleoperation and DMP are utilized for motion demonstration, which allows the robot to learn from human guidance and potentially improves task feasibility and generalizability. (3) Furthermore, the robot is empowered with environmental perception through a YOLO-based perception module for targeted tasks. The position of an object is registered once it is recognized and updated with its real-time position. Combining these elements, the proposed approach opens new possibilities for seamless human-robot collaboration in household tasks, making robots more practical and adaptable.
2308.14972#2
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
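Chunk 2308.14972#2 describes an LLM that converts high-level commands into sequences of executable motion functions, with YOLO-based perception resolving object positions and teleoperation/DMP available for correction. The sketch below illustrates the general idea of dispatching an LLM-emitted plan to motion primitives; the primitive names and the JSON contract are hypothetical, not the paper's interface.

```python
# Illustrative dispatch of an LLM-emitted plan to motion primitives.
import json

MOTION_PRIMITIVES = {
    "move_to": lambda target: print(f"moving to {target}"),
    "grasp":   lambda target: print(f"grasping {target}"),
    "place":   lambda target: print(f"placing at {target}"),
}

def execute_plan(llm_output: str):
    """llm_output is assumed to be a JSON list such as
    '[{"action": "move_to", "target": "cup"}, ...]'."""
    for step in json.loads(llm_output):
        action, target = step["action"], step["target"]
        if action not in MOTION_PRIMITIVES:
            # in a collaborative setting, a human teleoperator could step in here
            raise ValueError(f"unknown primitive: {action}")
        # a perception module (e.g. an object detector) would resolve `target`
        # to real coordinates before the primitive is executed
        MOTION_PRIMITIVES[action](target)

execute_plan('[{"action": "move_to", "target": "cup"}, {"action": "grasp", "target": "cup"}]')
```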
2308.15126
2
# Introduction The success of Large Language Models (LLMs), with ChatGPT as a prominent example, has attracted widespread attention (Zhang et al., 2022; Chowdhery et al., 2022; Touvron et al., 2023; Scao et al., 2022). Recently, Large Vision-Language Models (LVLMs) extend LLMs to understand visual inputs and demonstrate impressive multi-modal capabilities in a zero-shot manner (Zhu et al., 2023; Liu et al., 2023b; Ye et al., 2023; Gong et al., 2023; Wang et al., 2023; Li et al., 2023a; Mu et al., 2023; Su et al., 2023; Liu et al., 2020). These efforts have driven the development of multi-modal artificial general intelligence.
2308.15126#2
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
2308.14963
3
represent the state of the art today in approximate nearest neighbor search—the most important operation for vector search using embeddings—it is not clear that providing operations around HNSW indexes requires a separate and distinct vector store. Indeed, the most recent major release of Lucene (version 9, from December 2021) includes HNSW indexing and vector search, and these capabilities have steadily improved over time. The open-source nature of the Lucene ecosystem means that advances in the core library itself will be rapidly adopted and integrated into other software platforms within the broader ecosystem.
2308.14963#3
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
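Chunk 2308.14963#3 notes that HNSW indexes are the state of the art for the approximate nearest-neighbor search underlying vector retrieval, and that Lucene 9 ships HNSW indexing natively. To make the data structure's role concrete, here is a small example using the hnswlib library, chosen only as a convenient stand-in; it is not Lucene's HNSW API.

```python
# HNSW approximate nearest-neighbour search illustrated with hnswlib.
import numpy as np
import hnswlib

dim, n = 128, 10_000
vectors = np.random.default_rng(0).normal(size=(n, dim)).astype(np.float32)

index = hnswlib.Index(space="ip", dim=dim)           # inner-product similarity
index.init_index(max_elements=n, ef_construction=200, M=16)
index.add_items(vectors, np.arange(n))
index.set_ef(64)                                      # query-time accuracy/speed knob

query = vectors[0]
labels, distances = index.knn_query(query, k=10)      # approximate top-10 neighbours
print(labels[0][:3])
```

Swapping the brute-force scan in the earlier bi-encoder sketch for an index like this trades a small amount of recall for much lower query latency, which is the trade-off HNSW-backed search inside Lucene is meant to provide.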
2308.14972
3
∗Corresponding author email: [email protected] This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible. Fig. 1. Framework of LLM-based task planning with enhanced HRC. # 2. LLM-based human-robot collaboration framework The system diagram is illustrated in Fig. 1. The system consists of three main components: the user, the LLM, and the robot, which form an interactive loop. Additionally, we introduce a skilled teleoperator as an assistant to enhance the overall system’s generalizability and feasibility. # 2.1 LLM-based task planning
2308.14972#3
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
2308.15126
3
Prompt: Describe this image. Example LVLM responses to two visual inputs: (1) The image shows a bathroom with a toilet, sink, and a shower in the corner. There is a wooden shelving unit next to the toilet with various bottles and items placed on it. There is a small mirror hanging above the sink. The floor is made of tiles, and there is a small rug placed in front of the toilet. The overall theme of the bathroom is minimalist and clean. A man is having a good time in the shower. (2) This is an image of a person walking along the beach with their surfboard. They appear to be looking out at the ocean and the waves. The beach is sandy and there are some rocks in the water. There are some people on the beach, some swimming and some playing in the water. The sky is clear and blue and there are some clouds on the horizon. It looks like a beautiful day on the beach. Figure 1: Examples of the LVLMs’ hallucination. In real-world scenarios, LVLMs may generate content that doesn’t match the visual input. The words with red font represent the hallucination.
2308.15126#3
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
2308.14963
4
The growing popularity of so-called embedding APIs (Kamalloo et al., 2023) further strengthens our arguments. These APIs encapsulate perhaps the most complex and resource-intensive aspect of vector search—the generation of dense vectors from pieces of content. Embedding APIs hide model training, deployment, and inference behind the well-known benefits of service-based computing, much to the delight of practitioners. To support our arguments, we demonstrate vector search with OpenAI embeddings (Neelakantan et al., 2022) using the popular MS MARCO passage ranking test collection (Bajaj et al., 2018). Specifically, we have encoded the entire corpus and indexed the embedding vectors using Lucene. Evaluation on the MS MARCO development set queries and queries from the TREC Deep Learning Tracks (Craswell et al., 2019, 2020) shows that OpenAI embeddings are able to achieve a respectable level of effectiveness. And as Devins et al. (2022) have shown, anything doable in Lucene is relatively straightforward to replicate in Elasticsearch (and any other platform built on Lucene). Thus, we expect the ideas behind our demonstration to become pervasive in the near future.
2308.14963#4
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
2308.14972
4
# 2.1 LLM-based task planning In our approach, we build our model on an LLM (GPT-2) and train it on a text corpus following previous work by other researchers [2], enabling the LLM to provide accurate function predictions in response to specific instructions. Subsequently, we integrate the perceived target position information and the motion functions obtained from the LLM into a prepared code template, enabling the robot to execute the corresponding tasks effectively. To manage task execution efficiently, we adopt a hierarchical approach, treating long-horizon tasks, short-horizon tasks, and motion functions as three layers. Long-horizon tasks, which require more than 10 motion functions, are considered first-layer tasks; such tasks are separated into multiple short-horizon tasks through the LLM. Short-horizon tasks, which involve fewer than 10 motion functions, are treated as second-layer tasks. When the LLM receives a command for a second-layer task, it directly returns the functions necessary to accomplish the designated task.
2308.14972#4
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
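A toy Python sketch of the two-layer dispatch described in the preceding chunk (2308.14972#4) follows. The `llm_to_functions` and `llm_decompose` helpers are hypothetical stand-ins for calls to the fine-tuned LLM, and the "expand first, then decompose if more than 10 functions are needed" ordering is an assumption made for illustration; only the 10-function threshold comes from the chunk.

```python
# Toy sketch of hierarchical task planning: long-horizon (first-layer) tasks are split
# into short-horizon (second-layer) tasks, which map directly to motion functions.
from typing import List

LONG_HORIZON_THRESHOLD = 10  # tasks needing more than 10 motion functions are first-layer

def llm_to_functions(task: str) -> List[str]:
    """Hypothetical: ask the LLM for the motion functions of a short-horizon task."""
    canned = {
        "open cabinet": ["approach(cabinet)", "grasp(handle)", "pull()", "release()"],
        "wipe top": ["approach(cabinet_top)", "grasp(cloth)", "wipe()", "release()"],
    }
    return canned.get(task, ["noop()"] * 12)  # unknown commands pretend to be long tasks

def llm_decompose(task: str) -> List[str]:
    """Hypothetical: ask the LLM to split a long-horizon task into short-horizon tasks."""
    return ["open cabinet", "wipe top"]

def plan(task: str) -> List[str]:
    functions = llm_to_functions(task)
    if len(functions) > LONG_HORIZON_THRESHOLD:
        # First layer: decompose into short-horizon tasks, then expand each of them.
        functions = []
        for sub_task in llm_decompose(task):
            functions.extend(llm_to_functions(sub_task))
    # Second layer: short-horizon tasks map directly to motion functions.
    return functions

print(plan("clean the top of the cabinet"))
```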
2308.15126
4
However, LVLMs still suffer from hallucina- tion which refers to the generation of incorrect information that does not align with the visual input (Liu et al., 2023a). Previous work has mainly focused on investigating hallucination in LLMs and Vision-Language Pre-trained Models (VLPMs). For LLMs, hallucination predominantly stems from incorrect knowledge present in the train- ing data (Zhang et al., 2023; Li et al., 2023b), while for VLPMs, the challenge lies in accurately repre- senting visual information within abstract visual encodings (Shen et al., 2021; Biten et al., 2022). Although LVLMs combine the strengths of both LLMs and VLPMs, they inherently inherit both two pathways of hallucination generation. In this case, the flawed recognition of visual information within the framework of LLMs can lead to deceptively plausible yet ultimately absurd responses, as exem- plified in Figure 1. The hallucination poses poten- tial risks of substantial consequences that need to be addressed and rectified (Li et al., 2023d). ∗Equal contribution †Corresponding author Work done during internship at DAMO Academy, Alibaba Group. To solve the problem of hallucination in LVLMs, (Li et al., 2023d) proposed POPE, an object-based 1
2308.15126#4
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
2308.14963
5
We make available everything needed to reproduce the experiments described in this paper, starting with the actual OpenAI embeddings, which we make freely downloadable.2 At a high level, our demonstration shows how easy it is to take advantage of state-of-the-art AI techniques today without any AI-specific implementations per se: embeddings can be computed with simple API calls, and indexing and searching dense vectors is conceptually identical to indexing and searching text with bag-of-words models that have been available for decades. # 2 From Architecture to Implementation The central idea behind the bi-encoder architecture (see Figure 1) is to encode queries and passages into dense vectors—commonly referred to as “embeddings”—such that relevant query–passage pairs receive high scores, computed as the dot product of their embeddings. In this manner, search can be reformulated as a nearest neighbor search problem in vector space: given the query embedding, the system’s task is to rapidly retrieve the top-k passage embeddings with the largest dot products (Lin, 2021). Typically, “encoders” for generating the vector representations are implemented using transformers, which are usually fine-tuned in a supervised manner using a large dataset of relevant query–passage pairs (Karpukhin et al., 2020; Xiong et al., 2021).
2308.14963#5
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
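A worked sketch of the bi-encoder scoring described in the preceding chunk (2308.14963#5): relevance is the dot product of query and passage embeddings, and retrieval is top-k nearest neighbor search. The vectors below are random placeholders standing in for encoder outputs, and the brute-force search is only for illustration.

```python
# Brute-force top-k retrieval by dot product, standing in for nearest neighbor search
# over real passage embeddings.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
passage_embeddings = rng.normal(size=(1000, dim)).astype("float32")  # one row per passage
query_embedding = rng.normal(size=(dim,)).astype("float32")

def top_k(query: np.ndarray, passages: np.ndarray, k: int = 5):
    scores = passages @ query              # dot product of each passage with the query
    idx = np.argpartition(-scores, k)[:k]  # indices of the k largest scores (unordered)
    idx = idx[np.argsort(-scores[idx])]    # order them by descending score
    return idx, scores[idx]

ids, scores = top_k(query_embedding, passage_embeddings)
print(list(zip(ids.tolist(), scores.round(3).tolist())))
```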
2308.14972
5
# Fig. 2. Experiment data: for each task (Catch, Put/Place, Open, and the long-horizon task of cleaning the top of the cabinet) and its target objects (bottle, cup, bowl, cabinet), the figure tabulates the number of trials (Num), the motion functions used (Fns), the success rate (SR), executability (Exec), and feasibility (FSB). The first-layer tasks can be separated into multiple short-horizon tasks through the LLM. Subsequently, we process each short-horizon task following its specific procedure to divide it into motion functions as mentioned. Finally, the motion functions are organized following a planned task sequence to construct the complete long-horizon task. This hierarchical task handling allows for a more organized and effective execution of both short- and long-horizon tasks, contributing to our system’s overall efficiency and accuracy. # 2.2 DMP-based task correction
2308.14972#5
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
2308.15126
5
To solve the problem of hallucination in LVLMs, (Li et al., 2023d) proposed POPE, an object-based hallucination evaluation framework. POPE initially employs an object detector to identify all objects within an image and subsequently utilizes predefined prompts, such as "Is there a {object} in this image?", to query the model about the presence of an object which does not exist in the image. The model’s response of "yes" is regarded as an indication of hallucination. Nevertheless, our investigation, as shown in Figure 2, reveals that LVLMs tend to exhibit a response of "yes" to over 80% of queries about non-existent objects. In contrast, when the prompt "Describe the image" is adopted, less than 10% of the resultant responses included the hallucination objects. This discrepancy underscores the weak correlation between object-based hallucination evaluation and the actual hallucination of LVLMs.
2308.15126#5
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
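A small Python sketch of the POPE-style probe described in the preceding chunk (2308.15126#5): the model is asked about objects that are absent from the image, and a "yes" answer is counted as a hallucination. The `query_lvlm` function is a hypothetical stand-in for an actual LVLM inference call, and the image path and object list are placeholders.

```python
# POPE-style yes-rate computation over queries about absent objects.
from typing import List

def query_lvlm(image_path: str, prompt: str) -> str:
    """Hypothetical LVLM call; in practice this would run the vision-language model."""
    return "Yes, there is."  # placeholder answer

def pope_yes_rate(image_path: str, absent_objects: List[str]) -> float:
    hallucinated = 0
    for obj in absent_objects:
        answer = query_lvlm(image_path, f"Is there a {obj} in this image?")
        if answer.strip().lower().startswith("yes"):
            hallucinated += 1
    return hallucinated / len(absent_objects)

# The chunk reports that LVLMs answer "yes" to over 80% of such queries, even though
# open-ended descriptions of the same images mention the objects far less often.
print(pope_yes_rate("bathroom.jpg", ["dog", "bicycle", "piano"]))
```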
2308.14963
6
This formulation of search, in terms of comparisons between dense vectors, differs from “traditional” bag-of-words sparse representations that rely on inverted indexes for low-latency query evaluation. Instead, nearest neighbor search in vector space requires entirely different techniques: indexes based on hierarchical navigable small-world networks (HNSW) (Malkov and Yashunin, 2020) are commonly acknowledged as representing the state of the art. The Faiss library (Johnson et al., 2019) provides a popular implementation of HNSW indexes that is broadly adopted today and serves as a standard baseline. Despite conceptual similarities (Lin, 2021), it is clear that top-k retrieval on sparse vectors and dense vectors require quite different and distinct “software stacks”. Since hybrid approaches that combine both dense and sparse representations have been shown to be more effective than either alone (Ma et al., 2022b; Lin and Lin, 2023), many modern systems combine separate retrieval components to achieve hybrid retrieval. For example, the Pyserini IR toolkit (Lin et al., 2021a) integrates Lucene and Faiss for sparse and dense retrieval, respectively.
2308.14963#6
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
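The preceding chunk (2308.14963#6) describes HNSW as the standard technique for approximate nearest neighbor search over dense vectors. The sketch below builds and queries a small HNSW index; it uses the hnswlib library rather than Faiss or Lucene purely for brevity, and the random vectors stand in for real passage embeddings.

```python
# Small HNSW example: index dense vectors and retrieve approximate nearest neighbors
# by inner product (dot product), mirroring the bi-encoder retrieval setting.
import hnswlib
import numpy as np

dim, num_vectors = 128, 10_000
rng = np.random.default_rng(0)
data = rng.normal(size=(num_vectors, dim)).astype("float32")

index = hnswlib.Index(space="ip", dim=dim)            # inner-product similarity
index.init_index(max_elements=num_vectors, ef_construction=200, M=16)
index.add_items(data, np.arange(num_vectors))
index.set_ef(64)                                       # search-time accuracy/speed trade-off

query = rng.normal(size=(1, dim)).astype("float32")
labels, distances = index.knn_query(query, k=5)
print(labels[0], distances[0])
```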
2308.14972
6
# 2.2 DMP-based task correction To enhance the generalizability of LLM-based autonomy, we propose to integrate DMP-based task correction with human teleoperation-driven demonstrations. Dynamic Movement Primitives (DMP) is a generic approach for trajectory modeling in an attractor landscape based on differential dynamical systems [3]. In this paper, we leverage our previously developed teleoperation system [4], [5], which can intuitively control the robot motion through a VR device, and also utilize DMP to record trajectories obtained from manual teleoperation. These trajectories can then be reproduced to compensate for any deficiencies in the LLM-based autonomy, particularly failed function sequence generation or impractical function sequences. For instance, when we issue the command "catch the bowl", the default motion function for bowl grasping could be inadequate to complete the task. To address this issue, we switch to the DMP-based teleoperation mode and provide instructions for the desired action. The robot can then accurately reproduce the trajectory using DMP. This approach will be continually developed to manage a wider range of long-horizon tasks, with the ultimate goal of creating an effective Human-Robot Collaboration (HRC) system. This system will strategically take advantage of both human flexibility, in terms of adaptability and problem-solving skills, and robot autonomy, in terms of precision and efficiency.
2308.14972#6
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
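A minimal one-dimensional DMP sketch related to the preceding chunk (2308.14972#6): a demonstrated trajectory is encoded as a forcing term on top of spring-damper attractor dynamics and then reproduced. This is an illustrative simplification (a synthetic demonstration, Euler integration, and interpolation in place of basis-function regression), not the authors' teleoperation-recording implementation; the gains and time constants are arbitrary assumptions.

```python
# One-dimensional discrete DMP: learn a forcing term from a demonstration, then replay it.
import numpy as np

alpha_z, beta_z, alpha_x, tau = 25.0, 25.0 / 4.0, 1.0, 1.0
dt = 0.01
t = np.arange(0.0, 1.0, dt)

# "Demonstration": a smooth minimum-jerk-like reach from 0 to 1, standing in for a
# trajectory recorded through teleoperation.
y_demo = 10 * t**3 - 15 * t**4 + 6 * t**5
yd_demo = np.gradient(y_demo, dt)
ydd_demo = np.gradient(yd_demo, dt)
y0, g = y_demo[0], y_demo[-1]

# Canonical system phase and the forcing term implied by the demonstration:
# tau^2 * ydd = alpha_z * (beta_z * (g - y) - tau * yd) + f
x = np.exp(-alpha_x * t / tau)
f_target = tau**2 * ydd_demo - alpha_z * (beta_z * (g - y_demo) - tau * yd_demo)

# Reproduce the motion by integrating the DMP, looking the forcing term up by phase.
y, z = y0, 0.0
trajectory = []
for xi in x:
    f = np.interp(xi, x[::-1], f_target[::-1])   # np.interp needs increasing x values
    z += dt * (alpha_z * (beta_z * (g - y) - z) + f) / tau
    y += dt * z / tau
    trajectory.append(y)

print(f"start={trajectory[0]:.3f}, end={trajectory[-1]:.3f}, goal={g:.3f}")
```

Replaying a recorded demonstration through the attractor dynamics is what allows the correction mode to substitute for an inadequate default motion function, as in the "catch the bowl" example.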
2308.15126
6
The above analysis demonstrates that in idealized hallucination evaluation scenarios, LVLMs are highly susceptible to the influence of prompts, leading to biased responses that cannot be used as a basis for hallucination evaluation. Therefore, we advocate for conducting hallucination evaluation within real-world scenarios to avoid the negative impact of prompts on the evaluation results. However, one challenge is that the responses of LVLMs in real-world scenarios tend to be complex, which implies that traditional match-based evaluation methods are no longer applicable. This means that the evaluation tool needs to understand the complex responses of LVLMs.
2308.15126#6
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
2308.14963
7
Recognizing the need for managing both sparse and dense retrieval models, the dominant narrative today is that the modern enterprise “AI stack” requires a dedicated vector store or vector database, alongside existing fixtures such as relational databases, NoSQL stores, event stores, etc. A vector store would handle, for example, standard CRUD (create, read, update, delete) operations as well as nearest neighbor search. Many startups today are built on this premise; examples include Pinecone, Weaviate, Chroma, Milvus, Qdrant, just to name a few. This is the narrative that our work challenges. 2 https://github.com/castorini/anserini/blob/master/docs/experiments-msmarco-passage-openai-ada2.md Figure 1: A standard bi-encoder architecture, where encoders generate dense vector representations (embeddings) from queries and documents (passages). Retrieval is framed as k-nearest neighbor search in vector space.
2308.14963#7
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
2308.14972
7
# 3. Experiment and Result We conducted multiple experiments by providing "catch", "put", "open", and the long-horizon task "clean the top of the cabinet" for several objects to assess their success rates (SR), executability (Exec), and feasibility (FSB). The indicator Num means the number of trials, and Fns shows the number of motion functions used in completing the task. Additionally, Exec indicates whether the task is executable in the environment, and FSB indicates whether the motion is feasible for reaching the goal. The experimental results are presented in Fig. 2. In the case of the indicator "Exec" showing 0.80 in the "clean the top of the cabinet" task, the reasonable explanation is the randomness of the LLM, which has a low probability of generating incorrect responses (Exec = 0.2). As for the FSB of 0.00 in the "catch the bowl" task, this outcome can be attributed to the task being impossible to complete because the default motion function is unsuitable for the target object’s shape. In such cases, DMP-based task correction is used to provide the necessary demonstrations. # 4. Conclusion
2308.14972#7
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]
2308.15126
7
We notice that LLMs demonstrate powerful text-understanding capabilities. Based on this, we propose an innovative framework called Hallucination Evaluation based on Large Language Models (HaELM). First, we identify the hallucination patterns exhibited by LVLMs and systematically collect their hallucination responses. Subsequently, we craft prompts that elicit responses from ChatGPT aligned with these patterns to collect the pertinent training data. Finally, we fine-tune LLaMA (Touvron et al., 2023) through the LoRA-based methodology (Hu et al., 2021). As a result, HaELM becomes proficient in hallucination evaluation, leveraging reference descriptions of images as a basis for assessment. Experimental results attest to the comparable performance of HaELM and ChatGPT, exhibiting alignment with human annotations. In addition, HaELM has further advantages, including low cost, reproducibility, privacy preservation, and local deployment. Finally, we conduct a comprehensive analysis of the factors contributing to hallucination generation in current LVLMs, culminating in a set of suggestions for mitigating hallucination. We summarize the contributions as follows: • Through our analysis, we discover that LVLMs are easily influenced by prompts in idealized hallucination scenarios, making the results uncorrelated with hallucinations in real-world scenarios.
2308.15126#7
Evaluation and Analysis of Hallucination in Large Vision-Language Models
Large Vision-Language Models (LVLMs) have recently achieved remarkable success. However, LVLMs are still plagued by the hallucination problem, which limits the practicality in many scenarios. Hallucination refers to the information of LVLMs' responses that does not exist in the visual input, which poses potential risks of substantial consequences. There has been limited work studying hallucination evaluation in LVLMs. In this paper, we propose Hallucination Evaluation based on Large Language Models (HaELM), an LLM-based hallucination evaluation framework. HaELM achieves an approximate 95% performance comparable to ChatGPT and has additional advantages including low cost, reproducibility, privacy preservation and local deployment. Leveraging the HaELM, we evaluate the hallucination in current LVLMs. Furthermore, we analyze the factors contributing to hallucination in LVLMs and offer helpful suggestions to mitigate the hallucination problem. Our training data and human annotation hallucination data will be made public soon.
http://arxiv.org/pdf/2308.15126
Junyang Wang, Yiyang Zhou, Guohai Xu, Pengcheng Shi, Chenlin Zhao, Haiyang Xu, Qinghao Ye, Ming Yan, Ji Zhang, Jihua Zhu, Jitao Sang, Haoyu Tang
cs.LG, cs.AI, cs.CL, cs.CV
11 pages, 5 figures
null
cs.LG
20230829
20231010
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2306.14565" }, { "id": "2211.05100" }, { "id": "2305.13534" }, { "id": "2304.14178" }, { "id": "2205.01068" }, { "id": "2305.10355" }, { "id": "2306.04387" }, { "id": "1504.00325" }, { "id": "2305.16355" }, { "id": "2305.04790" }, { "id": "2304.08485" }, { "id": "2107.06383" }, { "id": "2305.15021" }, { "id": "2303.11381" }, { "id": "2304.10592" }, { "id": "2305.11175" }, { "id": "2010.03743" }, { "id": "2305.14975" }, { "id": "2310.00754" }, { "id": "2303.17580" }, { "id": "2305.03726" }, { "id": "2106.09685" } ]
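A Python sketch of LoRA-based fine-tuning as described in the preceding chunk (2308.15126#7): a LLaMA-style backbone is wrapped with low-rank adapters so it can be tuned cheaply on collected hallucination-judgement data. The checkpoint name, hyperparameters, and data handling below are illustrative assumptions using the Hugging Face transformers and peft libraries, not the authors' exact HaELM training recipe.

```python
# Wrap a causal LM with LoRA adapters so only the low-rank update matrices are trained.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base_model = "huggyllama/llama-7b"  # placeholder LLaMA checkpoint
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

lora_config = LoraConfig(
    r=8,                                   # rank of the low-rank update matrices
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections typically adapted
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the adapter weights are trainable

# Training would then proceed with a standard causal-LM loss on prompts that pair an
# image's reference description with an LVLM response and a hallucination judgement.
```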
2308.14963
8
Modern enterprise architectures are already exceedingly complex, and the addition of another software component (i.e., a distinct vector store) requires carefully weighing costs as well as benefits. The cost is obvious: increased complexity, not only from the introduction of a new component, but also from interactions with existing components. What about the benefits? While vector stores no doubt introduce new capabilities, the critical question is whether these capabilities can be provided via alternative means.
2308.14963#8
Vector Search with OpenAI Embeddings: Lucene Is All You Need
We provide a reproducible, end-to-end demonstration of vector search with OpenAI embeddings using Lucene on the popular MS MARCO passage ranking test collection. The main goal of our work is to challenge the prevailing narrative that a dedicated vector store is necessary to take advantage of recent advances in deep neural networks as applied to search. Quite the contrary, we show that hierarchical navigable small-world network (HNSW) indexes in Lucene are adequate to provide vector search capabilities in a standard bi-encoder architecture. This suggests that, from a simple cost-benefit analysis, there does not appear to be a compelling reason to introduce a dedicated vector store into a modern "AI stack" for search, since such applications have already received substantial investments in existing, widely deployed infrastructure.
http://arxiv.org/pdf/2308.14963
Jimmy Lin, Ronak Pradeep, Tommaso Teofili, Jasper Xian
cs.IR
null
null
cs.IR
20230829
20230829
[ { "id": "2110.01529" }, { "id": "2212.13534" }, { "id": "1611.09268" }, { "id": "2101.05667" }, { "id": "2201.10005" }, { "id": "2112.09118" }, { "id": "2302.07842" } ]
2308.14972
8
# 4. Conclusion In this work, we have successfully proposed an LLM-based task-planning method. An interface is built to integrate the LLM, the perception pipeline, the teleoperation system, and DMP-based task correction. The results show that the robot can execute commands from the user with a considerable success rate for short-horizon tasks like "catch", "put", or "open". In particular, for tasks with an FSB of 0.00, such as "catch the bowl", DMP-based correction is introduced as a remedy. However, long-horizon tasks show a relatively low success rate; the reason could be error accumulating over the motion sequence. Future work includes improving DMP-based task correction and fine-tuning teleoperation to compensate for hardware errors, thereby improving the success rate and feasibility. # 5. Acknowledgement This work was supported in part by JST Trilateral AI Research, Japan, under Grant JPMJCR20G8; in part by JSPS KAKENHI under Grant JP22K14222; and in part by NCGG under Chojuiryou Kenkyukaihatsuhi Nos. 19–5, 21-21. # References
2308.14972#8
LLM-Based Human-Robot Collaboration Framework for Manipulation Tasks
This paper presents a novel approach to enhance autonomous robotic manipulation using the Large Language Model (LLM) for logical inference, converting high-level language commands into sequences of executable motion functions. The proposed system combines the advantage of LLM with YOLO-based environmental perception to enable robots to autonomously make reasonable decisions and task planning based on the given commands. Additionally, to address the potential inaccuracies or illogical actions arising from LLM, a combination of teleoperation and Dynamic Movement Primitives (DMP) is employed for action correction. This integration aims to improve the practicality and generalizability of the LLM-based human-robot collaboration system.
http://arxiv.org/pdf/2308.14972
Haokun Liu, Yaonan Zhu, Kenji Kato, Izumi Kondo, Tadayoshi Aoyama, Yasuhisa Hasegawa
cs.RO, cs.AI
IEEE MHS 2023
null
cs.RO
20230829
20230829
[]