arXiv:2308.10379
Algorithm of Thoughts: Enhancing Exploration of Ideas in Large Language Models
Since we are considering the fourth vertical word, we need to look at the fourth letters of the horizontal words we found: - What is the fourth letter of rille: l - What is the fourth letter of olein: i - What is the fourth letter of tempt: p The word we are looking for fits the clue "To lisp" and needs to have the letters l, i, and p in its (h1, one) first, (h2, two) second, and (h3, three) third letters: l i p _ _.
The candidates are: The candidate is "lisps". - lisps (l i s p s, l i p _ _) doesn't fit We have considered all of the words for the horizontal words; we have two options: abase and abate, let's choose one of them. answer: h1. rille h2. olein h3. tempt h4. abase v2. ilebo v3. leman v5. enter
User: h1. One who saws h2. A fungus genus h3. An assessor h4. Pasture land h5. Receiving by the ear v1. To swell; to increase v2. The Brazilian macaw; an Australian bird v3. A Timorese island v4. Excessive fluid accumulation v5. Dewy; roscid The words I already found are: h1. sawer h3. rater v1. surge v4. edema
# Assistant: There are two horizontal and two vertical words. Since they are the same amount, let's consider the next unfound horizontal word, which is h2. Since we are considering the second horizontal word, we need to look at the second letters of the vertical words we found: - What is the second letter of surge: u - What is the second letter of edema: d The word we are looking for fits the clue "A fungus genus" and needs to have the letters u and d in its (v1, one) first and (v4, four) fourth letters: u _ _ d _. The candidates are: uredo. - uredo (u r e d o, u _ _ d _) fits We add the word uredo for h2. The current words are: h1. sawer h2. uredo h3. rater v1. surge v4. edema Now, let's consider the next unfound vertical word, which is v2.
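The letter-constraint check the transcript applies by hand can be sketched in a few lines of Python. This is an illustrative helper, not code from the paper; `crossing_pattern` and `fits` are hypothetical names, and crossing positions are 0-based.

```python
def crossing_pattern(crossing, letter_index, length=5):
    """Build a constraint pattern such as 'u__d_' from the letter_index-th
    letter of each already-found crossing word (0-based slot positions)."""
    pattern = ["_"] * length
    for slot, word in crossing.items():
        pattern[slot] = word[letter_index]
    return "".join(pattern)

def fits(candidate, pattern):
    """A candidate fits if it has the right length and matches every fixed letter."""
    return len(candidate) == len(pattern) and all(
        p in ("_", c) for p, c in zip(pattern, candidate)
    )

# h2 crosses v1 (slot 0) and v4 (slot 3); take their second letters (index 1).
pattern = crossing_pattern({0: "surge", 3: "edema"}, 1)  # 'u__d_'
print(fits("uredo", pattern))  # True
```

The same check rejects "lisps" against the earlier pattern `l i p _ _`, since its third letter is s rather than p.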
Since we are considering the second vertical word, we need to look at the second letters of the horizontal words we found: - What is the second letter of sawer: a - What is the second letter of uredo: r - What is the second letter of rater: a The word we are looking for fits the clue "The Brazilian macaw; an Australian bird" and needs to have the letters a, r, and a in its (h1, one) first, (h2, two) second, and (h3, three) third letters: a r a _ _.
The candidates are: arara. - arara (a r a r a, a r a _ _) fits We add the word arara for v2. The current words are: h1. sawer h2. uredo h3. rater v1. surge v2. arara v4. edema Now, let's consider the next unfound horizontal word, which is h4. Since we are considering the fourth horizontal word, we need to look at the fourth letters of the vertical words we found: - What is the fourth letter of surge: g - What is the fourth letter of arara: r - What is the fourth letter of edema: m The word we are looking for fits the clue "Pasture land" and needs to have the letters g, r, and m in its (v1, one) first, (v2, two) second, and (v4, four) fourth letters: g r _ m _.
The candidates are: grama. - grama (g r a m a, g r _ m _) fits We add the word grama for h4. The current words are: h1. sawer h2. uredo h3. rater h4. grama v1. surge v2. arara v4. edema Now, let's consider the next unfound vertical word, which is v3. Since we are considering the third vertical word, we need to look at the third letters of the horizontal words we found: - What is the third letter of sawer: w - What is the third letter of uredo: e - What is the third letter of rater: t
- What is the third letter of grama: a The word we are looking for fits the clue "A Timorese island" and needs to have the letters w, e, t, and a in its (h1, one) first, (h2, two) second, (h3, three) third, and (h4, four) fourth letters: w e t a _. The candidates are: The candidate is "wetar". - wetar (w e t a r, w e t a _) fits We add the word wetar for v3.
The current words are: h1. sawer h2. uredo h3. rater h4. grama v1. surge v2. arara v3. wetar v4. edema Now, let's consider the last unfound horizontal word, which is h5. Since we are considering the fifth horizontal word, we need to look at the fifth letters of the vertical words we found: - What is the fifth letter of surge: e - What is the fifth letter of arara: a - What is the fifth letter of wetar: r - What is the fifth letter of edema: a The word we are looking for fits the clue "Receiving by the ear" and needs to have the letters e, a, r, and a in its (v1, one) first, (v2, two) second, (v3, three) third, and (v4, four) fourth letters: e a r a _.
The candidates are: The candidate is "earal". - earal (e a r a l, e a r a _) fits We add the word earal for h5. The current words are: h1. sawer h2. uredo h3. rater h4. grama h5. earal v1. surge v2. arara v3. wetar v4. edema Since we found all horizontal words: answer: h1. sawer h2. uredo h3. rater h4. grama h5. earal v1. surge v2. arara v3. wetar v4. edema
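The completed answer above can be verified mechanically: every horizontal word must agree with every vertical word at their crossing cell. A minimal sketch (hypothetical helper, 0-based dict keys; not from the paper):

```python
def grid_consistent(h_words, v_words):
    """Check h_words[i][j] == v_words[j][i] for every found (horizontal, vertical) pair."""
    for i, h in h_words.items():
        for j, v in v_words.items():
            if h[j] != v[i]:
                return False
    return True

# Final fill from the transcript (v5 was not found, so it is omitted).
h = {0: "sawer", 1: "uredo", 2: "rater", 3: "grama", 4: "earal"}
v = {0: "surge", 1: "arara", 2: "wetar", 3: "edema"}
print(grid_consistent(h, v))  # True
```

Running such a check after each fill would catch a contradictory word as soon as it is placed, rather than at the end.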
User: h1. Dandruff; scum; the bull-trout h2. One who greets; to vacillate; a British river h3. A Turkish written decree h4. Mignon; petty; little h5. A bishop's permission for a priest to leave a diocese v1. To steal; to brush across v2. A sedge (a primitive three-sided grass) v3. Grape jam v4. A flatworm larva v5.
Ore refuse; to prepare material for glass by heat h2. waver h5. exeat v1. swipe v2. carex # Assistant: Let's consider the next unfound horizontal word, which is h1. Since we are considering the first horizontal word, we need to look at the first letters of the vertical words we found: - What is the first letter of swipe: s - What is the first letter of carex: c The word we are looking for fits the clue "Dandruff; scum; the bull-trout" and needs to have the letters s and c in its (v1, one) first and (v2, two) second letters: s c _ _ _.
The candidates are: scurf. - scurf (s c u r f, s c _ _ _) fits We add the word scurf for h1. The current words are: h1. scurf h2. waver h5. exeat v1. swipe v2. carex Now, let's consider the next unfound vertical word, which is v3.
Since we are considering the third vertical word, we need to look at the third letters of the horizontal words we found: - What is the third letter of scurf: u - What is the third letter of waver: v - What is the third letter of exeat: e The word we are looking for fits the clue "Grape jam" and needs to have the letters u, v, and e in its (h1, one) first, (h2, two) second, and (h5, five) fifth
letters: u v _ _ e. The candidates are: The candidate is "uvetta". - uvetta (u v e t t a, u v _ _ e) doesn't fit Let's consider the next unfound vertical word, which is v4. Since we are considering the fourth vertical word, we need to look at the fourth letters of the horizontal words we found: - What is the fourth letter of scurf: r - What is the fourth letter of waver: e - What is the fourth letter of exeat: a The word we are looking for fits the clue "A flatworm larva" and needs to have the
letters r, e, and a in its (h1, one) first, (h2, two) second, and (h5, five) fifth letters: r e _ _ a. The candidates are: redia. - redia (r e d i a, r e _ _ a) fits We add the word redia for v4. The current words are: h1. scurf h2. waver h5. exeat v1. swipe v2. carex v4. redia Now, let's consider the next unfound horizontal word, which is h3. Since we are considering the third horizontal word, we need to look at the third letters of the vertical words we found: - What is the third letter of swipe: i - What is the third letter of carex: r - What is the third letter of redia: d The word we are looking for fits the clue "A Turkish written decree" and needs to have the letters i, r, and d in its (v1, one) first, (v2, two) second, and (v4,
four) fourth letters: i r _ d _. The candidates are: irade. - irade (i r a d e, i r _ d _) fits We add the word irade for h3. The current words are: h1. scurf h2. waver h3. irade h5. exeat v1. swipe v2. carex v4. redia
Now, let's consider the last unfound vertical word, which is v5. Since we are considering the fifth vertical word, we need to look at the fifth letters of the horizontal words we found: - What is the fifth letter of scurf: f - What is the fifth letter of waver: r - What is the fifth letter of irade: e - What is the fifth letter of exeat: t The word we are looking for fits the clue "Ore refuse; to prepare material for glass by heat" and needs to have the letters f, r, e, and t in its (h1, one) first, (h2, two) second, (h3, three) third, and (h5, five) fifth letters: f r e _ t.
The candidates are: The candidate is "frett". frett (f r e t t, f r e _ t) fits We add the word frett for v5. The current words are: h1. scurf h2. waver h3. irade h5. exeat v1. swipe v2. carex v4. redia v5. frett Since we found all horizontal words: answer: h1. scurf h2. waver h3. irade h5. exeat v1. swipe v2. carex v4. redia v5. frett
# Propose Words User: h1. A lunar valley h2. A fatty oil h3. To entice h4. To lower; to reduce h5. A solitary person v1. According to the roster v2. Another name for Port-Francqui v3. An illicit lover; a European lake v4. To lisp v5. To come in Assistant: All possible 5-letter words for each clue; for each clue we give 10 options: h1.
RILLE, VALLIS, CRATER, BASINS, PLAINS h2. OLEIC, OLEIN, GREASE, BUTTER, ANIMAL h3. LURE, TEMPT, SEDUCE, ENTICE, DRAWIN h4. ABASE, LOWER, REDUCE, LESSEN, DEBASE h5. HERMIT, ALONE, LONER, SINGLE, SOLITARY v1. ONTAP, LISTED, ROSTER, SCHEDULE, PLANNED v2.
ILEBO, PORTF, CONGO, AFRICA, COLONY v3. LOVER, AMOUR, GENEVA, LEMAN, ZURICH v4. SLUR, LISPS, STUTTER, MUMBLE, STAMMER v5. ENTER, ARRIVE, COMEIN, APPEAR, SHOWUP
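Note that several proposed candidates cannot occupy a 5-letter slot at all (e.g. HERMIT, SCHEDULE), so a trivial length filter can prune the proposal lists before the fill-in phase begins. An illustrative sketch (the `proposals` dict mirrors two slots from the transcript; this is not code from the paper):

```python
proposals = {
    "h5": ["HERMIT", "ALONE", "LONER", "SINGLE", "SOLITARY"],
    "v1": ["ONTAP", "LISTED", "ROSTER", "SCHEDULE", "PLANNED"],
}

# Keep only candidates that can actually fill a 5-letter slot.
valid = {slot: [w for w in words if len(w) == 5] for slot, words in proposals.items()}
print(valid)  # {'h5': ['ALONE', 'LONER'], 'v1': ['ONTAP']}
```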
# Creative Writing AoT "Write a coherent passage of 4 short paragraphs. The end sentence of each paragraph must be: {0} Firstly, make five different plans for a coherent passage, then write. Your output should be of the following format: Plan 1: Your plan here. Plan 2: Your plan here. Plan 3: Your plan here. Plan 4: Your plan here. Plan 5: Your plan here. Secondly, given an instruction and several plans, decide which choice is most promising.
Analyze each choice in detail, then conclude in the last line "The best choice is {{s}}", where s is the integer id of the choice. Thirdly, write the passage according to that chosen plan in the most coherent way. Add "Passage:" before writing the passage under it. Passage: Your passage here. Finally, refine the passage in the most coherent way, but you still have to end each paragraph with the given sentences as before. Final Passage: Final passage here.
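A prompt contract like "conclude in the last line 'The best choice is {s}'" makes the model's decision machine-parsable. A hypothetical parser sketch (the function name and regex are assumptions, not from the paper):

```python
import re

def parse_best_choice(response: str) -> int:
    """Extract s from a final line of the form 'The best choice is {s}'."""
    last_line = response.strip().splitlines()[-1]
    match = re.search(r"The best choice is\s*\{?(\d+)\}?", last_line)
    if match is None:
        raise ValueError("no best-choice line found")
    return int(match.group(1))

print(parse_best_choice("Plan 3 links the end sentences most naturally.\nThe best choice is 3"))  # 3
```

The same pattern applies to the score prompt's "Thus the coherency score is {s}" line.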
# Score Prompt
Analyze the following passage, then at the last line conclude "Thus the coherency score is {{s}}", where s is an integer from 1 to 10. {0}

Acknowledgment: We appreciate the discussions and assistance provided by L. Wang.

Contributions: B. Sel played a pivotal role in shaping the primary concept, spearheading the experimental design and evaluation, and leading the paper's writing process. A. Tawaha actively engaged in discussions and conducted experiments.
V. Khattar collaborated through discussions and played a role in conducting the experiments. R. Jia and M. Jin both engaged in constructive discussions, with M. Jin also offering advisory guidance. Additional info about the changes from the first version (dated 8/20/2023) can be found at this link (https://tinyurl.com/2vnjxw93).
arXiv:2308.10053
Large Language Models as Zero-Shot Conversational Recommenders
Zhouhang Xie* ([email protected]), University of California, San Diego, La Jolla, California, USA
Zhankui He* ([email protected]), University of California, San Diego, La Jolla, California, USA
Harald Steck ([email protected]), Netflix Inc., Los Gatos, California, USA
Dawen Liang ([email protected]), Netflix Inc., Los Gatos, California, USA
Yesu Feng ([email protected]), Netflix Inc., Los Gatos, California, USA
Bodhisattwa Prasad Majumder ([email protected]), University of California, San Diego, La Jolla, California, USA
Nathan Kallus ([email protected]), Netflix Inc., Los Gatos, California, USA; Cornell University, New York, New York, USA
Julian McAuley ([email protected]), University of California, San Diego, La Jolla, California, USA
ABSTRACT
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting, with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations and suggesting directions for the design of future conversational recommenders.

CCS CONCEPTS
• Information systems → Personalization; • Computing methodologies → Natural language generation.

# KEYWORDS
conversational recommendation, large language model, datasets
* Both authors contributed equally to this research.

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s). CIKM '23, October 21–25, 2023, Birmingham, United Kingdom. © 2023 Copyright held by the owner/author(s). ACM ISBN 979-8-4007-0124-5/23/10. https://doi.org/10.1145/3583780.3614949

ACM Reference Format: Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, and Julian McAuley. 2023. Large Language Models as Zero-Shot Conversational Recommenders. In Proceedings of the 32nd ACM International Conference on Information and Knowledge Management (CIKM '23), October 21–25, 2023, Birmingham, United Kingdom. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3583780.3614949

1 INTRODUCTION
Conversational recommender systems (CRS) aim to elicit user preferences and offer personalized recommendations by engaging in interactive conversations. In contrast to traditional recommenders that primarily rely on users' actions like clicks or purchases, CRS possesses the potential to: (1) understand not only users' historical actions but also users' (multi-turn) natural-language inputs; (2) provide not only recommended items but also human-like responses for multiple purposes such as preference refinement, knowledgeable discussion, or recommendation justification. Towards this objective, a typical conversational recommender contains two components [10, 41, 64, 74]: a generator to generate natural-language responses and a recommender to rank items to meet users' needs. Recently, significant advancements have shown the remarkable potential of large language models (LLMs)¹, such as ChatGPT [30], in various tasks [4, 6, 51, 71]. This has captured the attention of the recommender systems community to explore the possibility of leveraging LLMs in recommendation or more general personalization tasks [3, 27, 34, 48, 56]. Yet, current efforts generally concentrate on evaluating LLMs in traditional recommendation settings, where only users' past actions like clicks serve as inputs [3, 27, 34, 48]. The conversational recommendation scenario, though involving more natural language interactions, is still in its infancy [16, 63].

¹We refer to LLMs as the large-sized pre-trained language models with exceptional zero-shot abilities as defined in [71].
Figure 1: Large Language Models (LLMs) as Zero-Shot Conversational Recommenders (CRS). We introduce a simple prompting strategy to define the task description T, format requirement F, and conversation context S for an LLM, denoted as 𝓕; we then post-process the generative results into ranked item lists with processor Φ.

In this work, we propose to use large language models as zero-shot conversational recommenders and then empirically study the LLMs' [11, 30, 51, 68] recommendation abilities. Our detailed contributions in this study include three key aspects regarding data, evaluation, and analysis.
"users who like A typically also like B") to make conversational recommendations. We design several probing tasks to uncover the model's workings and the characteristics of the CRS data. Additionally, we present empirical findings that highlight certain limitations of LLMs as zero-shot CRS, despite their effectiveness.

Data. We construct Reddit-Movie, a large-scale conversational recommendation dataset with over 634k naturally occurring recommendation-seeking dialogs from users from Reddit², a popular discussion forum. Different from existing crowd-sourced conversational recommendation datasets, such as ReDIAL [41] and INSPIRED [22], where workers role-play users and recommenders, the Reddit-Movie dataset offers a complementary perspective with conversations where users seek and offer item recommendations in the real world. To the best of our knowledge, this is the largest public conversational recommendation dataset, with 50 times more conversations than ReDIAL. We summarize the key findings of this paper as follows:
• CRS recommendation abilities should be reassessed by eliminating repeated items as ground truth.
• LLMs, as zero-shot conversational recommenders, demonstrate improved performance on established and new datasets over fine-tuned CRS models.
• LLMs primarily use their superior content/context knowledge, rather than their collaborative knowledge, to make recommendations.
• CRS datasets inherently contain a high level of content/context information, making CRS tasks better-suited for LLMs than traditional recommendation tasks.

Evaluation. By evaluating the recommendation performance of LLMs on multiple CRS datasets, we first notice a repeated item shortcut in current CRS evaluation protocols. Specifically, there exist "repeated items" in previous evaluation testing samples serving as ground-truth items, which allows the creation of a trivial baseline (e.g., copying the mentioned items from the current conversation history) that outperforms most existing models, leading to spurious conclusions regarding current CRS recommendation abilities. After removing the "repeated items" in training and testing data, we re-evaluate multiple representative conversational recommendation models [10, 41, 64, 74] on ReDIAL, INSPIRED and our Reddit dataset. With this experimental setup, we empirically show that LLMs can outperform existing fine-tuned conversational recommendation models even without fine-tuning.

Analysis. In light of the impressive performance of LLMs as zero-shot CRS, a fundamental question arises: What accounts for their remarkable performance? Similar to the approach taken in [53], we posit that LLMs leverage both content/context knowledge (e.g.,
"genre", "actors" and "mood") and collaborative knowledge (e.g.,

• LLMs suffer from limitations such as popularity bias and sensitivity to geographical regions.

These findings reveal the unique importance of the superior content/context knowledge in LLMs for CRS tasks, offering great potential to LLMs as an effective approach in CRS; meanwhile, analyses must recognize the challenges in evaluation, datasets, and potential problems (e.g., debiasing) in future CRS design with LLMs.

2 LLMS AS ZERO-SHOT CRS
2.1 Task Formation
Given a user set U, an item set I, and a vocabulary V, a conversation can be denoted as C = (u_t, s_t, I_t)_{t=1}^{n}. That means during the t-th turn of the conversation, a speaker u_t ∈ U generates an utterance s_t = (w_i)_{i=1}^{m}, which is a sequence of words w_i ∈ V. This utterance s_t also contains a set of mentioned items I_t ⊆ I (I_t can be an empty set if no items are mentioned). Typically, there are two users in the conversation C playing the roles of seeker and recommender respectively. Let us use the 2nd conversation turn in Figure 1 as an example.
Here t = 2, u_t is [System], s_t is "You would love Terminator!", and I_2 is a set containing the movie Terminator.

²https://www.reddit.com/

Table 1: Dataset Statistics. We denote a subset of Reddit-Movie in 2022 as base, and the entire ten-year dataset as large.

Dataset              #Conv.   #Turns     #Users  #Items
INSPIRED [22]        999      35,686     999     1,967
ReDIAL [41]          11,348   139,557    764     6,281
Reddit-Movie_base    85,052   133,005    10,946  24,326
Reddit-Movie_large   634,392  1,669,720  36,247  51,203

Following many CRS papers [10, 41, 64, 74], the recommender component of a CRS is specifically designed to optimize the following objective: during the j-th turn of a conversation, where u_j is the recommender, the recommender takes the conversational context (u_t, s_t, I_t)_{t=1}^{j-1} as its input, and generates a ranked list of items Î_j that best matches the ground-truth items in I_j.

# 2.2 Framework
Prompting. Our goal is to utilize LLMs as zero-shot conversational recommenders. Specifically, without the need for fine-tuning, we intend to prompt an LLM, denoted as 𝓕, using a task description template T, format requirement F, and conversational context S before the j-th turn. This process can be formally represented as:

Î_j = Φ(𝓕(T, F, S)).
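The formula above reads as a two-stage pipeline: one LLM call, one post-processing step. A minimal sketch with stubbed components (`stub_llm` and `split_lines` are hypothetical stand-ins for illustration, not the authors' code):

```python
def zero_shot_recommend(llm, task_desc, format_req, context, post_process):
    """I_hat = Phi(F(T, F, S)): prompt the LLM, then map its free-text
    output to a ranked list of items."""
    prompt = "\n".join([task_desc, format_req, context])
    raw_output = llm(prompt)          # F(T, F, S): natural-language recommendations
    return post_process(raw_output)   # Phi: convert text into a ranked item list

# Stub LLM and post-processor for illustration only.
stub_llm = lambda prompt: "1. Terminator\n2. Aliens"
split_lines = lambda text: [line.split(". ", 1)[1] for line in text.splitlines()]
print(zero_shot_recommend(stub_llm, "Recommend movies.", "Numbered list.",
                          "User: I love sci-fi.", split_lines))
# ['Terminator', 'Aliens']
```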
To better understand this zero-shot recommender, we present an example in Figure 1 with the prompt setup in our experiments.³

Models. We consider several popular LLMs 𝓕 that exhibit zero-shot prompting abilities in two groups. To try to ensure deterministic results, we set the decoding temperature to 0 for all models.
• GPT-3.5-turbo [30]⁴ and GPT-4 [51] from OPENAI, with abilities of solving many complex tasks in a zero-shot setting [6, 51], but closed-sourced.
• BAIZE [68]⁵ and Vicuna [11], which are representative open-sourced LLMs fine-tuned based on LLAMA-13B [61].

Processing. We do not access model weights or output logits from LLMs. Therefore, we apply a post-processor Φ (e.g., fuzzy matching) to convert a recommendation list in natural language to a ranked list
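The post-processor Φ can be as simple as fuzzy string matching against the item catalog. A sketch using Python's standard `difflib`; the catalog entries and the 0.4 cutoff are illustrative assumptions, not the paper's actual configuration:

```python
import difflib

CATALOG = ["Terminator (1984)", "Titanic (1997)", "The Matrix (1999)"]

def match_title(generated_title, catalog=CATALOG):
    """Map a generated title to the closest catalog entry, or None if no
    candidate clears the similarity cutoff."""
    close = difflib.get_close_matches(generated_title, catalog, n=1, cutoff=0.4)
    return close[0] if close else None

print(match_title("The Terminator"))  # Terminator (1984)
```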
Î_j. The approach of generating item titles instead of ranking item IDs is referred to as a generative retrieval [7, 60] paradigm.

3 DATASET
Ideally, a large-scale dataset with diverse interactions and real-world conversations is needed to evaluate models' ability in conversational recommendation. Existing conversational recommendation datasets are usually crowd-sourced [22, 32, 41, 75] and thus only partially capture realistic conversation dynamics. For example, a crowd worker responded with "Whatever, I'm open to any suggestion." when asked about movie preferences in ReDIAL; this happens since crowd workers often do not have a particular preference at the time of completing a task. In contrast, a real user could have a very particular need, as shown in Figure 2.

³We leave more prompting techniques such as CoT [66] to future work.
⁴Referred to as GPT-3.5-t hereafter.
⁵We use BAIZE-V2 in https://huggingface.co/project-baize/baize-v2-13b
[Figure 2 content: example model inputs, ordered by dialog information from low to high.
MovieLens (items only): (User previously watched movies): Back to the Future, Man in Black, Harry Potter, ...
ReDIAL (items & verbal preference): "I like 2001: A Space Odyssey and Tangerine, and I watched Enter the Void last night and it was pretty good."
Reddit-Movie, ours (items & complex verbal preference): "Something that I can focus on but nothing too harsh. It can be strange and bizarre, but dreamy visuals and movement and smooth and sometimes unnatural dialogue is what gives it. It's a sweet sensation. It's how I felt watching Wings of Desire, Eyes Wide Shut, Querelle, for some reason."]
Figure 2: Typical model inputs from a traditional recommendation dataset (MovieLens [21]), an existing CRS dataset (ReDIAL [41]), and our Reddit-Movie dataset. The Reddit-Movie dataset contains more information in its textual content compared to existing datasets, where users often explicitly specify their preference. See Section 5.2 for quantitative analysis.

To complement crowd-sourced CRS datasets, we present the Reddit-Movie dataset, the largest-scale conversational movie recommendation dataset to date, with naturally occurring movie recommendation conversations that can be used along with existing crowd-sourced datasets to provide richer perspectives for training and evaluating CRS models. In this work, we conduct our model evaluation and analysis on two commonly used crowd-sourced datasets, ReDIAL [41] and INSPIRED [22], as well as our newly collected Reddit dataset. We show qualitative examples from the Reddit dataset in Figure 2 and quantitative analysis in Section 5.2.

Dataset Construction. To construct a CRS dataset from Reddit, we process all Reddit posts from Jan 2012 to Dec 2022 from pushshift.io6. We consider movie recommendation scenarios7 and extract related posts from five related subreddits: r/movies, r/bestofnetflix, r/moviesuggestions, r/netflixbestof and r/truefilm. We process the raw data with a pipeline of conversational recommendation identification, movie mention recognition, and movie entity linking8. In our following evaluation, we use the most recent 9k conversations in Reddit-Movie_base, from December 2022, as the testing set, since these samples occur after GPT-3.5-t's release. Meanwhile, GPT-4 [51] also reports a pre-training data cutoff of Sept. 20219. For the other compared models, we use the remaining 76k conversations in the Reddit-Movie_base dataset for training and validation.
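As a rough sketch of the temporal split described above (the field name, record layout, and cutoff handling are illustrative assumptions, not the authors' code), the December 2022 test set can be carved out by timestamp:

```python
from datetime import datetime, timezone

# Hypothetical conversation records; pushshift.io dumps carry Unix
# timestamps in a "created_utc" field, which we assume here.
conversations = [
    {"id": "c1", "created_utc": 1640995200},  # 2022-01-01
    {"id": "c2", "created_utc": 1669852800},  # 2022-12-01
    {"id": "c3", "created_utc": 1671062400},  # 2022-12-15
]

# Conversations from December 2022 onward form the test set (they
# postdate the GPT-3.5-t release); the rest go to training/validation.
CUTOFF = datetime(2022, 12, 1, tzinfo=timezone.utc).timestamp()

test = [c for c in conversations if c["created_utc"] >= CUTOFF]
train_val = [c for c in conversations if c["created_utc"] < CUTOFF]
print(len(test), len(train_val))  # 2 1
```

The same cutoff-by-timestamp idea extends directly to the full 85k-conversation corpus.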
6https://pushshift.io/
7Other domains like songs and books can potentially be processed in a similar way.
8Check our evaluation data, LLM scripts, results, and the links to the Reddit-Movie datasets in https://github.com/AaronHeee/LLMs-as-Zero-Shot-Conversational-RecSys.
9We note that there is a possibility that GPT-4's newest checkpoint might include a small amount of more recent data [51].

[Figure 3 graphic: #HIT@K counts on INSPIRED, ReDIAL, and Reddit; caption below.]

Figure 3: To show the repeated item shortcut, we count CRS recommendation hits using the Top-K ranked list, K = {1, 5}. We group the ground-truth hits by repeated items (shaded bars) and new items (unshaded bars). The trivial baseline copies existing items from the current conversation history in chronological order, from the most recent, and does not recommend new items.

Discussion. From the statistics in Table 1, we observe: (1) The Reddit-Movie dataset stands out as the largest conversational recommendation dataset, encompassing 634,392 conversations and covering 51,203 movies. (2) In comparison to ReDIAL [41] and INSPIRED [22], Reddit-Movie contains fewer multi-turn conversations, mainly due to the inherent characteristics of Reddit posts. (3) Examining the representative examples depicted in Figure 2, we find that Reddit-Movie conversations tend to include more complex and detailed user preferences than ReDIAL, as they originate from real-world conversations on Reddit, enriching the conversational recommendation datasets with a diverse range of discussions.

4 EVALUATION
In this section, we evaluate the proposed LLM-based framework on ReDIAL [41], INSPIRED [22], and our Reddit dataset. We first explain the evaluation setup and a repeated item shortcut of the previous evaluation in Sections 4.1 and 4.2. Then, we re-train models and discuss LLM performance in Section 4.3.

Compared CRS Models. We consider several representative CRS models. For baselines that rely on structured knowledge, we use the entity linking results of the ReDIAL and INSPIRED datasets provided by UniCRS [64]. Note that we do not include more works [43, 50, 54] because UniCRS [64] is representative, with similar results.
• ReDIAL [41]: This model is released along with the ReDIAL dataset, with an auto-encoder [58]-based recommender.
• KBRD [10]: This model proposes to use DBpedia [1] to enhance the semantic knowledge of items or entities.
• KGSF [74]: This model incorporates two knowledge graphs to enhance the representations of words and entities, and uses the Mutual Information Maximization method to align the semantic spaces of those two knowledge graphs.
• UniCRS [64]: This model uses a pre-trained language model, DialoGPT [69], with prompt tuning to conduct the recommendation and conversation generation tasks, respectively.

# 4.1 Evaluation Setup
Repeated vs. New Items. Given a conversation C = (u_t, s_t, I_t)_{t=1}^{n}, it is challenging to identify the ground-truth recommended items, i.e., whether the mentioned items I_k at the k-th (k <= n) turn are used for recommendation purposes. A common evaluation setup assumes that when u_k is the recommender, all items i in I_k serve as ground-truth recommended items. In this work, we further split the items i in I_k into two categories: repeated items and new items. Repeated items are items that have appeared in previous conversation turns, i.e., {i | there exists t in [1, k) with i in I_t}; new items are items not mentioned in previous conversation turns. We explain the details of this categorization in Section 4.2.

Evaluation Protocol. On these three datasets, we evaluate several representative CRS models and several LLMs on their recommendation abilities. For baselines, after re-running the training code provided by the authors, we report the prediction performance using Recall@K [10, 41, 64, 74] (i.e., HIT@K). We report the means and the standard errors10 of the metric with K = {1, 5}.

4.2 Repeated Items Can Be Shortcuts
Current evaluation for conversational recommendation systems does not differentiate between repeated and new items in a conversation. We observe that this evaluation scheme favors systems that optimize for mentioning repeated items. As shown in Figure 3, a trivial baseline that always copies seen items from the conversation history outperforms most previous models under the standard evaluation scheme. This phenomenon highlights the risk of shortcut learning [18], where a decision rule performs well against certain benchmarks and evaluations but fails to capture the true intent of the system designer. Indeed, #HIT@1 for the tested models dropped by more than 60% on average when we focus on new item recommendation only, which is not apparent from the overall recommendation performance. After manually checking, we observe a typical pattern of repeated items, shown in the example conversation in Figure 1. In this conversation, Terminator at the 6th turn is used as the ground-truth item. The system repeated Terminator because it quoted this movie for a content-based discussion during the conversation rather than making recommendations.
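The repeated/new categorization from Section 4.1, the HIT@K metric, and the trivial copy baseline from Figure 3 can be sketched together as follows (illustrative helper functions, not the paper's evaluation code):

```python
def split_items(turn_items, k):
    """Split ground-truth items at turn k into repeated vs. new,
    based on whether they appeared in turns 1..k-1."""
    seen = {i for t in range(k - 1) for i in turn_items[t]}
    ground_truth = turn_items[k - 1]
    repeated = [i for i in ground_truth if i in seen]
    new = [i for i in ground_truth if i not in seen]
    return repeated, new

def hit_at_k(ranked, ground_truth, k):
    """HIT@K: 1 if any ground-truth item is in the top-K ranked list."""
    return int(any(i in ranked[:k] for i in ground_truth))

def copy_history_baseline(turn_items, k_turn, top_k):
    """Trivial shortcut baseline: repeat items from turns 1..k-1,
    most recent first, never recommending a new item."""
    ranked = []
    for items in reversed(turn_items[: k_turn - 1]):
        for i in items:
            if i not in ranked:
                ranked.append(i)
    return ranked[:top_k]

turns = [["Alien"], ["Heat"], ["Alien", "Arrival"]]  # items per turn
repeated, new = split_items(turns, k=3)
baseline = copy_history_baseline(turns, k_turn=3, top_k=5)
print(repeated, new)                      # ['Alien'] ['Arrival']
print(hit_at_k(baseline, repeated, k=5))  # 1: the shortcut hits the repeated item
print(hit_at_k(baseline, new, k=5))       # 0: but never a new item
```

This is exactly why a history-copying policy scores well under the undifferentiated protocol but contributes nothing once evaluation is restricted to new items.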
Given the nature of recommendation conversations between two users, it is more probable that items repeated during a conversation are intended for discussion rather than serving as recommendations. We argue that, considering the large portion of repeated items (e.g., more than 15% of ground-truth items are repeated items in INSPIRED), it is beneficial to remove repeated items and re-evaluate CRS models to better understand models' recommendation ability. It is worth noting that repetition patterns have also been investigated in evaluating other recommender systems, such as next-basket recommendation [40].

10We show standard errors as error bars in our figures and gray numbers in our tables.

[Figure 4 graphic: Recall@1 (top) and Recall@5 (bottom) bars for ReDIAL, KBRD, KGSF, UniCRS, BAIZE, Vicuna, GPT-3.5-t, and GPT-4 on INSPIRED, ReDIAL, and Reddit; caption below.]

Figure 4: CRS recommendation performance on New Items in terms of Recall@K, with K = {1, 5}. To exclude the influence of repeated items in CRS evaluation, we remove all repeated items in the training and testing datasets and re-train all baselines.

Table 2: Recall@1 results of considering all generated item titles (Φ0) and only considering in-dataset item titles (Φ1). Values are mean (standard error).

| Model | INSPIRED Φ0 | INSPIRED Φ1 | ReDIAL Φ0 | ReDIAL Φ1 | Reddit Φ0 | Reddit Φ1 |
|---|---|---|---|---|---|---|
| BAIZE | .019 (.019) | .028 (.011) | .021 (.002) | .021 (.002) | .012 (.001) | .013 (.008) |
| Vicuna | .028 (.011) | .033 (.012) | .020 (.002) | .020 (.002) | .012 (.001) | .012 (.001) |
| GPT-3.5-t | .047 (.015) | .052 (.015) | .041 (.003) | .043 (.003) | .022 (.001) | .023 (.001) |
| GPT-4 | .062 (.017) | .066 (.017) | .043 (.003) | .046 (.004) | .022 (.001) | .023 (.001) |

Table 3: Fraction of Top-K (K = 20 in our prompt setup) recommendations (#rec) that can be string-matched in the IMDB movie database (%imdb) for the different models, which gives a lower bound on the rate of non-hallucinated movie titles.

| Model | #rec | %imdb |
|---|---|---|
| BAIZE | 259,333 | 81.56% |
| Vicuna | 258,984 | 86.98% |
| GPT-3.5-t | 321,048 | 95.51% |
| GPT-4 | 322,323 | 94.86% |

# 4.3 LLMs Performance
Finding 1 - LLMs outperform fine-tuned CRS models in a zero-shot setting. To compare models' abilities to recommend new items to the user in conversation, we re-train existing CRS models on all datasets for new item recommendation only. The evaluation results are shown in Figure 4. Large language models, although not fine-tuned, have the best performance on all datasets. Meanwhile, the performance of all models is uniformly lower on Reddit compared to the other datasets, potentially due to the large number of items and fewer conversation turns, making recommendation more challenging.
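The %imdb lower bound in Table 3 comes from string-matching generated titles against IMDB's title list; a minimal sketch with a naive normalization (the normalization is our assumption, not the authors' exact matching procedure):

```python
def normalize(title):
    """Lowercase and keep only alphanumerics/spaces for robust matching."""
    return "".join(ch for ch in title.lower() if ch.isalnum() or ch == " ").strip()

def imdb_match_rate(recommendations, imdb_titles):
    """Fraction of generated titles found in the IMDB title set."""
    known = {normalize(t) for t in imdb_titles}
    hits = sum(normalize(r) in known for r in recommendations)
    return hits / len(recommendations)

imdb = ["The Terminator", "Alien", "Heat"]
recs = ["the terminator", "Alien!", "Totally Made-Up Movie"]
print(round(imdb_match_rate(recs, imdb), 2))  # 0.67: 2 of 3 titles match
```

Because exact matching misses legitimate title variants, the reported rates are lower bounds on non-hallucinated titles.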
Finding 2 - GPT-based models achieve superior performance to open-sourced LLMs. As shown in Figure 4, large language models consistently outperform other models across all three datasets, while GPT-4 is generally better than GPT-3.5-t. We hypothesize that GPT-4's larger parameter size enables it to retain more of the correlations between movie names and user preferences that naturally occur in the language models' pre-training data. Vicuna and BAIZE, while comparable to prior models on most datasets, perform significantly worse than their teacher, GPT-3.5-t. This is consistent with previous works' finding that smaller models distilled via imitation learning cannot fully inherit larger models' abilities on downstream tasks [20].

Finding 3 - LLMs may generate out-of-dataset item titles, but few hallucinated recommendations. We note that language models trained on open-domain data naturally produce items outside the allowed item set during generation. In practice, removing these items improves the models' recommendation performance. Large language models outperform other models (with GPT-4 being the best) consistently, regardless of whether these unknown items are removed or not, as shown in Table 2. Meanwhile, Table 3 shows that around 95% of generated recommendations from GPT-based models (around 81% from BAIZE and 87% from Vicuna) can be found in IMDB11 by string matching. These matching rates are lower bounds, indicating that there are only a few hallucinated item titles in the LLM recommendations in the movie domain.

5 DETAILED ANALYSIS
Observing LLMs' remarkable conversational recommendation performance in the zero-shot setting, we are interested in what accounts for their effectiveness and what their limitations are. We aim to answer these questions from both a model and a data perspective.

# 5.1 Knowledge in LLMs
Experiment Setup. Motivated by the probing work of [53], we posit that two types of knowledge in LLMs can be used in CRS:
• Collaborative knowledge, which requires the model to match items with similar ones, according to community interactions like "users who like A typically also like B". In our experiments, we define the collaborative knowledge in LLMs as the ability to make accurate recommendations using item mentions in conversational contexts.
• Content/context knowledge, which requires the model to match recommended items with their content or context information. In our experiments, we define the content/context knowledge in LLMs as the ability to make accurate recommendations based on all other conversation inputs rather than item mentions, such as contextual descriptions, mentioned genres, and director names.

To understand how LLMs use these two types of knowledge, given the original conversation context S (example in Figure 1), we perturb S with three different strategies as follows and subsequently re-query the LLMs. We denote the original as S0:
• S0 (Original): we use the original conversation context.
• S1 (ItemOnly): we keep mentioned items and remove all natural language descriptions in the conversation context.
• S2 (ItemRemoved): we remove mentioned items and keep the other content in the conversation context.
• S3 (ItemRandom): we replace the mentioned items in the conversation context with items uniformly sampled from the item set I of the dataset, to eliminate the potential influence of S2 on the sentence grammar structure.

11Movie titles in https://datasets.imdbws.com/.

[Figure 5 graphic: Recall@5 bars for Vicuna, GPT-3.5-t, and GPT-4 on INSPIRED, ReDIAL, and Reddit under conditions (S0, Φ1), (S1, Φ1), (S2, Φ1), (S2, Φ2), (S3, Φ1), and (S3, Φ2); caption below.]

Figure 5: Ablation studies for the research question about the primary knowledge used by LLMs for CRS. Here Φ1 is the post-processor that only considers in-dataset item titles; Φ2 is the post-processor based on Φ1 that further excludes all items seen in the conversational context from the generated recommendation lists. For inputs like Original (S0) and ItemOnly (S1), LLMs show similar performance with Φ1 or Φ2, so we only keep Φ1 here. We consider Φ2 because ItemRemoved (S2) and ItemRandom (S3) carry no information about already-mentioned items, which may cause under-estimated accuracy under Φ1 compared to Original.

Table 4: To understand the content/context knowledge in LLMs and existing CRS models, we re-train the existing CRS models using the same perturbed conversation context ItemRemoved (S2). We include the results of the representative CRS model UniCRS (denoted as CRS*) as well as a representative text encoder, BERT-small [15] (denoted as TextEnc*). Values are mean (standard error).

| Model | INSPIRED R@1 | INSPIRED R@5 | ReDIAL R@1 | ReDIAL R@5 | Reddit R@1 | Reddit R@5 |
|---|---|---|---|---|---|---|
| Vicuna | .024 (.010) | .062 (.017) | .014 (.002) | .053 (.003) | .008 (.001) | .025 (.001) |
| GPT-3.5-t | .057 (.016) | .123 (.023) | .030 (.003) | .105 (.005) | .018 (.001) | .068 (.002) |
| GPT-4 | .062 (.017) | .128 (.023) | .032 (.003) | .102 (.005) | .019 (.001) | .075 (.002) |
| CRS* | .039 (.011) | .087 (.014) | .015 (.002) | .058 (.003) | .001 (.000) | .008 (.001) |
| TextEnc* | .038 (.015) | .090 (.016) | .013 (.002) | .053 (.004) | .002 (.000) | .009 (.001) |

[Figure 6 graphic: GPT-3.5-t Recall@5 bars grouped by item occurrence counts ({0}, [1, 5), [5, 10), [10, +∞)) in the conversation context, on INSPIRED, ReDIAL, and Reddit; caption below.]

Figure 6: GPT-3.5-t Recall@5 results grouped by the occurrences of items in the conversation context, with the count of conversations per dataset.

Finding 4 - LLMs mainly rely on content/context knowledge to make recommendations. Figure 5 shows a drop in performance for most models across various datasets when the original conversation text Original (S0) is replaced with other texts, indicating that LLMs leverage both content/context knowledge and collaborative knowledge in recommendation tasks. However, the importance of these knowledge types differs: our analysis reveals that content/context knowledge is the primary knowledge utilized by LLMs in CRS.
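The context perturbations S0-S3 from the Experiment Setup can be sketched as simple string operations, assuming item mentions are annotated inline with an illustrative `<Title>` markup (the markup and function names are our assumptions):

```python
import random

def perturb(utterances, item_mentions, strategy, item_set, seed=0):
    """Build the S0-S3 conversation variants used for probing."""
    rng = random.Random(seed)
    if strategy == "S0":          # Original: unchanged context
        return list(utterances)
    if strategy == "S1":          # ItemOnly: keep only the item mentions
        return [" ".join(f"<{m}>" for m in item_mentions)]
    out = []
    for u in utterances:
        for m in item_mentions:
            if strategy == "S2":  # ItemRemoved: drop the mention
                u = u.replace(f"<{m}>", "")
            else:                 # S3, ItemRandom: swap in a sampled item
                u = u.replace(f"<{m}>", f"<{rng.choice(item_set)}>")
        out.append(u)
    return out

ctx = ["I loved <Alien>, any similar slow-burn sci-fi?"]
print(perturb(ctx, ["Alien"], "S2", ["Heat", "Arrival"]))
# ['I loved , any similar slow-burn sci-fi?']
```

S3 keeps the sentence structure intact while destroying the collaborative signal, which is exactly why it serves as a grammar-preserving control for S2.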
When using ItemOnly (S1) as a replacement for Original, there is an average performance drop of more than 60% in terms of Recall@5. On the other hand, GPT-based models experience only a minor performance drop of less than 10% on average when using ItemRemoved (S2) or ItemRandom (S3) instead of Original. Although the smaller-sized model Vicuna shows a larger performance drop, it is still considerably milder than with ItemOnly. To accurately reflect the recommendation abilities of LLMs with ItemRemoved and ItemRandom, we introduce a new post-processor
denoted as Φ2 (described in the caption of Figure 5). By employing Φ2, the performance gaps between Original and ItemRemoved (or ItemRandom) are further reduced. Furthermore, Figure 6 demonstrates a consistently close performance gap between Original and ItemRemoved (or ItemRandom) across different testing samples, which vary in size and in the number of item mentions in Original. These results suggest that, given a conversation context, LLMs primarily rely on content/context knowledge rather than collaborative knowledge to make recommendations. This behavior interestingly diverges from many traditional recommenders like collaborative filtering [23, 24, 36, 46, 55, 58] or sequential recommenders [25, 33, 59, 73], where user-interacted items are essential.

Table 5: To understand the collaborative knowledge in LLMs and existing CRS models, we re-train the existing CRS models using the same perturbed conversation context ItemOnly (S1). We include the results of the representative CRS model UniCRS (denoted as CRS*) as well as a representative item-based collaborative model, FISM [31] (denoted as ItemCF*). Values are mean (standard error).

| Model | INSPIRED R@1 | INSPIRED R@5 | ReDIAL R@1 | ReDIAL R@5 | Reddit R@1 | Reddit R@5 |
|---|---|---|---|---|---|---|
| Vicuna | .005 (.005) | .024 (.010) | .011 (.002) | .039 (.003) | .005 (.000) | .015 (.001) |
| GPT-3.5-t | .024 (.010) | .052 (.015) | .021 (.002) | .063 (.004) | .007 (.001) | .026 (.001) |
| GPT-4 | .014 (.008) | .052 (.015) | .025 (.002) | .069 (.004) | .007 (.001) | .028 (.001) |
| CRS* | .038 (.013) | .085 (.019) | .025 (.002) | .072 (.004) | .003 (.000) | .015 (.001) |
| ItemCF* | .042 (.012) | .087 (.016) | .029 (.003) | .088 (.004) | .004 (.001) | .018 (.001) |

Finding 5 - GPT-based LLMs possess better content/context knowledge than existing CRS. From Table 4, we observe the superior recommendation performance of GPT-based LLMs against representative conversational recommendation and text-only models on all datasets, showing remarkable zero-shot abilities in understanding user preference from textual inputs and generating correct item titles.
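The two post-processors can be sketched as list filters (our reading of the description in the Figure 5 caption, not the released code; function names are ours): Φ1 keeps only in-dataset titles, and Φ2 additionally drops items already mentioned in the context.

```python
def phi1(generated, item_set):
    """Phi1: keep only in-dataset titles, preserving rank order."""
    return [t for t in generated if t in item_set]

def phi2(generated, item_set, context_items):
    """Phi2: Phi1 plus removal of items already seen in the context."""
    return [t for t in phi1(generated, item_set) if t not in context_items]

items = {"Alien", "Heat", "Arrival"}
gen = ["Alien", "Totally Made-Up Movie", "Heat", "Arrival"]
print(phi1(gen, items))             # ['Alien', 'Heat', 'Arrival']
print(phi2(gen, items, {"Alien"}))  # ['Heat', 'Arrival']
```

Φ2 matters precisely because, under ItemRemoved and ItemRandom, the model cannot see which items were already mentioned, so repeating them should not count against it.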
We conclude that GPT-based LLMs can provide more accurate recommendations than existing trained CRS models in an ItemRemoved (S2) setting, demonstrating better content/context knowledge.

Finding 6 - LLMs generally possess weaker collaborative knowledge than existing CRS. In Table 5, the results on INSPIRED and ReDIAL indicate that LLMs underperform existing representative CRS or ItemCF models by 30% when using only the item-based conversation context ItemOnly (S1). This indicates that LLMs, trained on a general corpus, typically lack the collaborative knowledge exhibited by representative models trained on the target dataset. There are several possible reasons for this weak collaborative knowledge in LLMs. First, the training corpus may not contain sufficient information for LLMs to learn the underlying item similarities. Second, although LLMs may possess some collaborative knowledge, it might not align with the interactions in the target datasets, possibly because the underlying item similarities can be highly dataset- or platform-dependent.

[Figure 7 graphic: (a) entropy versus token counts for MsMarco, HotpotQA, Reddit, ReDIAL, INSPIRED, and DailyDialog; (b) bars for RAND, FT, PT, and PT+FT on the linked Reddit dataset; caption below.]

Figure 7: The left subfigure shows the entropy of the frequency distribution of 1,2,3-grams with respect to the number of words drawn from each dataset (item names excluded), to measure the content/context information across datasets. The right subfigure shows the results on the processed Reddit collaborative dataset aligned to ML-25M [21]. RAND denotes a random baseline, FT denotes fine-tuning on Reddit, PT denotes pre-training on ML-25M, and PT+FT means FT after PT.

However, in the case of the Reddit dataset, LLMs outperform baselines in both Recall@1 and Recall@5, as shown in Table 5. This outcome could be attributed to the dataset's large number of rarely interacted items, resulting in limited collaborative information. The
Reddit dataset contains 12,982 items with no more than 3 mentions as responses. This poses a challenge in correctly ranking these items within the Top-5 or even Top-1 positions. LLMs, which possess at least some understanding of the semantics in item titles, have a chance to outperform baselines trained on datasets containing a large number of cold-start items.

Recent research on LLMs in traditional recommendation systems [27, 34, 48] also observes the challenge of effectively leveraging collaborative information without knowing the target interaction data distribution. Additionally, another study [3] on traditional recommendation systems suggests that LLMs are beneficial in settings with many cold-start items. Our experimental results support these findings within the context of conversational recommendations.

# 5.2 Information from CRS Data
Experimental Setup for Finding 7. To understand LLMs in CRS tasks from the data perspective, we first measure the content/context information in the CRS datasets. Content/context information refers to the amount of information contained in conversations, excluding the item titles, which reasonably challenges existing CRS and favors LLMs according to the findings in Section 5.1. Specifically, we conduct an entropy-based evaluation for each CRS dataset and compare the conversational datasets with several popular conversation and question-answering datasets, namely DailyDialog (chit-chat) [45], MsMarco (conversational search) [2], and HotpotQA (question answering).
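Such an entropy-based evaluation can be sketched as follows (a simplified illustration of the protocol, not the authors' script): compute the Shannon entropy of the 1-, 2-, and 3-gram frequency distributions of a token sample and take their geometric mean.

```python
import math
from collections import Counter

def ngram_entropy(tokens, n):
    """Shannon entropy (bits) of the n-gram frequency distribution."""
    grams = Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(grams.values())
    return -sum(c / total * math.log2(c / total) for c in grams.values())

def info_content(tokens):
    """Geometric mean of the 1-, 2-, and 3-gram entropies, used as a
    surrogate for the amount of information in a text sample."""
    entropies = [ngram_entropy(tokens, n) for n in (1, 2, 3)]
    return math.prod(entropies) ** (1 / 3)

sample = "i want a slow burn sci fi movie with great visuals".split()
print(f"info content: {info_content(sample):.2f}")
```

Since this quantity grows with sample size, it is only comparable across datasets at matched subset sizes, which motivates the repeated-subsampling protocol described next.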
We use ItemRemoved (S2) conversation texts as in Section 5.1, and adopt the geometric mean of the entropy of the 1,2,3-gram frequency distributions as a surrogate for the amount of information contained in the datasets, following previous work on evaluating information content in text [29]. However, entropy naturally grows with the size of a corpus, and each CRS dataset has a different distribution of words per sentence, sentences per dialog, and corpus size. Thus, it would be unfair to compare entropy between corpora on a
per-dialog, per-turn, or per-dataset basis. To ensure a fair comparison, we repeatedly draw increasingly large subsets of text from each dataset, compute the entropy of these subsets, and report the trend of entropy growth with respect to the size of the subsampled text for each CRS dataset.

Finding 7 - Reddit provides more content/context information than the other two CRS datasets. Based on the results in Figure 7a, we observe that the Reddit dataset has the most content/context information among the three conversational recommendation datasets. These observations also align with the results in Figure 5 and Table 4, where LLMs, which possess better content/context knowledge than the baselines, achieve higher relative improvements than on the other two datasets. Meanwhile, the content/context information in Reddit is close to that of question answering and conversational search, and higher than that of existing conversational recommendation and chit-chat datasets.

[Figure 8 graphic: scatter plots on ReDIAL and Reddit of generated-recommendation frequency versus ground-truth frequency, with titles such as The Shawshank Redemption, Avengers: Infinity War, The Hangover, and Inception labeled; caption below.]

Figure 8: Scatter plots of the frequency of LLM (GPT-4) generated recommendations and ground-truth items.

Finding 8 - Collaborative information is insufficient for satisfactory recommendations, given the current models. Quantifying the collaborative information in datasets is challenging. Instead of proposing methods to measure collaborative information, we aim to make new observations based on the general performance results presented in Figure 4 and the recommendation results using only collaborative information in Table 5. Comparing the performance of the best models in Table 5 under an ItemOnly (S
1) setting with the performance of the best models in Figure 4 under an Original (S0) setting reveals a significant disparity. For instance, on ReDIAL, the Recall@1 performance is 0.029 for ItemCF* compared to 0.046 for GPT-4, a 39.96% decrease. Similarly, for Reddit, the Recall@1 performance is 0.007 compared to 0.023, both for GPT-4, which is 69.57% lower. We also experimented with other recommender systems, such as transformer-based models [33, 59], to encode the item-only inputs, and found similar results. Given the current performance gap, we find that the existing models, relying solely on collaborative information, are insufficient to provide satisfactory recommendations. We speculate that either (1) more advanced models or training methods are required to better comprehend the collaborative information in CRS datasets, or (2) the collaborative information in CRS datasets is too limited to support satisfactory recommendations.

[Figure 9 graphic: ground-truth item counts on Reddit by country (log scale) and the corresponding Recall@1 with GPT-4, for regions including UK, USA, JPN, RUS, AUS, CAN, ESP, FRA, and DEU; caption below.]

Figure 9:
Ground-truth item counts in Reddit by country (in log scale) and the corresponding Recall@1 by country.

Experimental Setup for Finding 9. To understand whether the collaborative information from CRS datasets is aligned with pure interaction datasets, we conduct an experiment on the Reddit dataset. We first process the dataset to link its items to a popular interaction dataset, ML-25M [21]12. We then experiment with two representative encoders for item-based collaborative filtering, based on FISM [31] and the Transformer [59] (TRM), respectively. We report the testing results on Reddit, with fine-tuning on Reddit (FT), pre-training on ML-25M (PT), and pre-training on ML-25M followed by fine-tuning on Reddit (PT+FT). Note that since this is a linked dataset with additional processing, the results are not comparable with the aforementioned results on Reddit.

Finding 9 - Collaborative information can be dataset- or platform-dependent. Figure 7b shows that the models solely pre-trained on ML-25M (PT) outperform a random baseline, indicating that the data in CRS may share item similarities with pure interaction data from another platform to some extent. However, Figure 7b also shows a notable performance gap between PT and fine-tuning on Reddit (FT). Additionally, we do not observe further performance improvement when pre-training on ML-25M and then fine-tuning on Reddit (PT+FT). These observations indicate that the collaborative information and underlying item similarities, even for the same items, can be largely influenced by the specific dataset or platform. This finding may also partially explain the inferior zero-shot recommendation performance of LLMs in Table 5, and it suggests the necessity of further checking the alignment of the collaborative knowledge in LLMs with the target datasets.

# 5.3 Limitations of LLMs as Zero-shot CRS
Finding 10 - LLM recommendations suffer from popularity bias in CRS. Popularity bias refers to the phenomenon that popular items are recommended even more frequently than their popularity would warrant [8]. Figure 8 shows the popularity bias in LLM recommendations, though it may not be biased toward the popular items in the target datasets. On ReDIAL, the most popular movies, such as Avengers:
Infinity War, appear around 2% of the time over all ground-truth items; on Reddit, the most popular movies, such as Everything Everywhere All at Once, appear less than 0.3% of the time over ground-truth items. But among the generated recommendations from GPT-4 (other LLMs share a similar trend), the most popular items, such as The Shawshank Redemption, appear around 5% of the time on ReDIAL and around 1.5% on Reddit. Compared to the target datasets, LLM recommendations are more concentrated on popular items, which may cause further issues like the bias amplification loop [8]. Moreover, the recommended popular items are similar across different datasets, which may reflect item popularity in the pre-training corpus of LLMs.

12We only use items that can be linked to ML-25M in this experiment. Here 63.32% of items are linked using the links.csv file from ML-25M.

Finding 11 - Recommendation performance of LLMs is sensitive to geographical regions. Despite their effectiveness in general, it is unclear whether LLMs can be good recommenders across various cultures and regions. Specifically, pre-trained language models' strong open-domain ability can be attributed to pre-training on massive data [5], but this also leads to LLMs' sensitivity to the data distribution. To investigate LLMs' recommendation abilities for various regions, we take test instances from the Reddit dataset, obtain the production region of 7,476 movies from a publicly available movie dataset13 by exact title matching, and then report the Recall@1 for the linked movies grouped by region. We only report regions with more than 300 data points to ensure enough data to support the result.
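The comparison behind Figure 8 reduces to each title's share of the ground-truth items versus its share of the generated recommendations; a toy sketch (made-up counts, not the paper's data):

```python
from collections import Counter

def share(items):
    """Fraction of total occurrences per title."""
    counts = Counter(items)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

ground_truth = ["Heat", "Alien", "Arrival", "Heat"]
generated = ["Heat", "Heat", "Heat", "Alien"]  # concentrated on one title

gt, rec = share(ground_truth), share(generated)
for title in sorted(set(gt) | set(rec)):
    print(f"{title}: ground truth {gt.get(title, 0):.0%}, "
          f"recommended {rec.get(title, 0):.0%}")
```

A title sitting above the diagonal in such a frequency-versus-frequency plot (recommended share exceeding ground-truth share) is exactly the popularity-bias pattern Finding 10 describes.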
As shown in Figure 9, the performance of the current best model, GPT-4, is higher for movies produced in English-speaking regions. This could be due to bias in the training data: the left of Figure 9 shows that the items mentioned on Reddit forums are dominated by movies from English-speaking regions. Such a result highlights that the recommendation performance of large language models varies by region and culture, and demonstrates the importance of cross-regional analysis and evaluation for language-model-based conversational recommendation models.

# 6 RELATED WORK

Conversational Recommendation. Conversational recommender systems (CRS) aim to understand user preferences and provide personalized recommendations through conversations. Typical traditional CRS setups include template-based CRS [13, 26, 37, 38, 70] and critiquing-based CRS [9, 42, 67]. More recently, as natural language processing has advanced, the community has developed "deep" CRS [10, 41, 64] that support interactions in natural language. Aside from collaborative filtering signals, prior work shows that CRS models benefit from various kinds of additional information, including knowledge-enhanced models [10, 74] that make use of external knowledge bases [1, 47], review-aware models [49], and session/sequence-based models [43, 76]. Presently, UniCRS [64], a model built on DialoGPT [69] with prompt tuning [4], stands as the state-of-the-art approach on CRS datasets such as ReDIAL [41] and INSPIRED [22]. Leveraging LLMs, [16] proposes a new CRS pipeline but does not provide quantitative results, and [63] proposes better user simulators to improve evaluation strategies for LLM-based CRS. Unlike those papers, we uncover a repeated-item shortcut in the previous evaluation protocol and propose a framework in which LLMs serve as zero-shot CRS, with detailed analyses from both model and data perspectives to support our findings.

Large Language Models.
Advances in natural language processing (NLP) show that large language models (LLMs) exhibit strong

13https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset
generalization ability towards unseen tasks and domains [5, 12, 65]. In particular, existing work reveals that the performance and sample efficiency of language models on downstream tasks can be improved simply by scaling up their parameter sizes [35]. Meanwhile, language models can further generalize to a wide range of unseen tasks through instruction tuning, i.e., learning to follow task instructions expressed in natural language [52, 57]. Following these advances, many works successfully deploy large language models on a wide range of downstream tasks such as question answering, numerical reasoning, code generation, and commonsense reasoning without any gradient updates [5, 35, 44, 72]. Recently, there have been various attempts by the recommendation community to leverage large language models for recommendation, including both adapting architectures used by large language models [14, 19] and repurposing existing LLMs for recommendation [39, 48, 62]. However, to the best of our knowledge, ours is the first work to provide a systematic quantitative analysis of the ability of LLMs on conversational recommendation.

# 7 CONCLUSION AND DISCUSSION

We investigate Large Language Models (LLMs) as zero-shot Conversational Recommender Systems (CRS). Through our empirical investigation, we first identify a repetition shortcut in previous standard CRS evaluations that can lead to unreliable conclusions regarding model design. We then demonstrate that LLMs as zero-shot CRS surpass all fine-tuned CRS models in our experiments. Motivated by this effectiveness, we conduct a comprehensive analysis from both the model and data perspectives to gain insights into the working mechanisms of LLMs, the characteristics of typical CRS tasks, and the limitations of using LLMs as CRS directly.
Our experimental evaluations encompass two publicly available datasets, supplemented by a newly created dataset of movie recommendation conversations collected by scraping a popular discussion website. This dataset is the largest public CRS dataset to date and provides more diverse and realistic conversations for CRS research. We also discuss future directions based on our findings in this section.
On LLMs. Given their remarkable performance even without fine-tuning, LLMs hold great promise as an effective approach to CRS tasks by offering superior content/contextual knowledge. The encouraging performance of open-sourced LLMs [11, 68] also opens up opportunities to further improve CRS performance via efficient tuning [3, 28] and ensembling with collaborative filtering [36]. Meanwhile, many conventional topics, such as debiasing [8] and trustworthiness [17], need to be revisited in the context of LLMs.

On CRS. Our findings call for a systematic re-benchmarking of more CRS models to comprehensively understand their recommendation abilities and the characteristics of CRS tasks. Gaining a deeper understanding of CRS tasks also requires new datasets from diverse sources, e.g., crowd-sourcing platforms [22, 41], discussion forums, and realistic CRS applications across various domains, languages, and cultures. Meanwhile, our analysis of information types uncovers the unique importance of the superior content/context knowledge in LLMs for CRS tasks; this distinction sets CRS tasks apart from traditional recommendation settings and urges us to explore the interconnections between CRS tasks and traditional recommendation [21] or conversational search [2] tasks.
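As a concrete illustration of the collaborative-filtering ensembling mentioned above, one simple option is to linearly blend per-item scores from an LLM ranker with scores from a CF model. This is only a hedged sketch of one possible design, not an approach evaluated in this paper, and the function and variable names are hypothetical.

```python
def blend_scores(llm_scores, cf_scores, alpha=0.5):
    """Linearly blend per-item scores from an LLM ranker with
    collaborative-filtering scores (both given as item -> score dicts).
    Items missing from one source fall back to that source's minimum."""
    items = set(llm_scores) | set(cf_scores)
    llm_floor = min(llm_scores.values())
    cf_floor = min(cf_scores.values())
    blended = {
        i: alpha * llm_scores.get(i, llm_floor)
           + (1 - alpha) * cf_scores.get(i, cf_floor)
        for i in items
    }
    # Return items ranked by blended score, best first.
    return sorted(blended, key=blended.get, reverse=True)
```

In practice the weight `alpha` would be tuned on a validation split, and the two score sources normalized to comparable ranges before blending.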
CIKM '23, October 21–25, 2023, Birmingham, United Kingdom

REFERENCES

[1] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11-15, 2007. Proceedings. Springer, 722–735.
[2] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268 [cs.CL]

[3] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation. arXiv preprint arXiv:2305.00447 (2023).

[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–
1901.

[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020.
Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 1877–1901. https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf

[6] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712 (2023).

[7] Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive Entity Retrieval. In International Conference on Learning Representations. https://openreview.net/forum?id=5k8F6UU39V

[8] Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems 41, 3 (2023), 1–39.

[9] Li Chen and Pearl Pu. 2012. Critiquing-based recommenders: survey and emerging trends. User Modeling and User-Adapted Interaction 22 (2012), 125–150.

[10] Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, and Jie Tang. 2019. Towards Knowledge-Based Recommender Dialog System. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 1803–
1813.

[11] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. https://lmsys.org/blog/2023-03-30-vicuna/
[12] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling Language Modeling with Pathways. ArXiv abs/2204.02311 (2022).

[13] Konstantina Christakopoulou, Filip Radlinski, and Katja Hofmann. 2016. Towards conversational recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 815–824.

[14] Zeyu Cui, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022.
M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems. arXiv:2205.08084 [cs.IR]

[15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 4171–4186.

[16] Luke Friedman, Sameer Ahuja, David Allen, Terry Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, et al. 2023. Leveraging Large Language Models in Conversational Recommender Systems. arXiv preprint arXiv:2305.07961 (2023).

[17] Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, and Yongfeng Zhang. 2022. A survey on trustworthy recommender systems. arXiv preprint arXiv:2207.12515 (2022).

[18] Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix Wichmann. 2020.
Shortcut learning in deep neural networks. Nature Machine Intelligence 2 (2020), 665–673.

[19] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5). In RecSys '22: Sixteenth ACM Conference on Recommender Systems, Seattle, WA, USA, September 18 - 23, 2022, Jennifer Golbeck, F.
Maxwell Harper, Vanessa Murdock, Michael D. Ekstrand, Bracha Shapira, Justin Basilico, Keld T. Lundgaard, and Even Oldridge (Eds.). ACM, 299–315.

[20] Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. The False Promise of Imitating Proprietary LLMs. arXiv:2305.15717 [cs.CL]

[21] F. Maxwell Harper and Joseph A. Konstan. 2016.
The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 5 (2016), 19:1–19:19.

[22] Shirley Anugrah Hayati, Dongyeop Kang, Qingxiaoyang Zhu, Weiyan Shi, and Zhou Yu. 2020. INSPIRED: Toward Sociable Recommendation Dialog Systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 8142–8152.

[23] Xiangnan He, Zhankui He, Xiaoyu Du, and Tat-Seng Chua. 2018.
Adversarial personalized ranking for recommendation. In The 41st International ACM SIGIR conference on research & development in information retrieval. 355–364.

[24] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web. 173–182.

[25] Zhankui He, Handong Zhao, Zhe Lin, Zhaowen Wang, Ajinkya Kale, and Julian McAuley. 2021. Locker: Locally constrained self-attentive sequential recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 3088–3092.

[26] Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, and Julian McAuley. 2022. Bundle MCR: Towards Conversational Bundle Recommendation. In Proceedings of the 16th ACM Conference on Recommender Systems. 288–298.

[27] Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2023. Large Language Models are Zero-Shot Rankers for Recommender Systems. arXiv preprint arXiv:2305.08845 (2023).

[28] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021.
LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).

[29] Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Learning to Generate Move-by-Move Commentary for Chess Games from Large-Scale Social Forum Data. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL).
Melbourne, Australia.

[30] John Schulman, Barret Zoph, C. Kim, Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny, Rapha Gontijo Lopes, and Sengjia Zhao. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI (2022).

[31] Santosh Kabbur, Xia Ning, and George Karypis. 2013. FISM: factored item similarity models for top-n recommender systems. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. 659–667.

[32] Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul A Crook, Y-Lan Boureau, and Jason Weston. 2019. Recommendation as a Communication Game: Self-Supervised Bot-Play for Goal-oriented Dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 1951–
1961.

[33] Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM). IEEE, 197–206.

[34] Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, and Derek Zhiyuan Cheng. 2023. Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. arXiv preprint arXiv:2305.06474 (2023).
[35] Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. ArXiv abs/2001.08361 (2020).

[36] Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37.

[37] Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min-Yen Kan, and Tat-Seng Chua. 2020. Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. In Proceedings of the 13th International Conference on Web Search and Data Mining. 304–312.
[38] Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua. 2020. Interactive path reasoning on graph for conversational recommendation. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2073–2083.

[39] Jinming Li, Wentao Zhang, Tian Wang, Guanglei Xiong, Alan Lu, and Gerard Medioni. 2023. GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation. arXiv:2304.03879 [cs.IR]

[40] Ming Li, Sami Jullien, Mozhdeh Ariannezhad, and Maarten de Rijke. 2023. A next basket recommendation reality check. ACM Transactions on Information Systems 41, 4 (2023), 1–29.

[41] Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. Advances in neural information processing systems 31 (2018).

[42] Shuyang Li, Bodhisattwa Prasad Majumder, and Julian McAuley. 2021. Self-Supervised Bot Play for Conversational Recommendation with Justifications. arXiv preprint arXiv:2112.05197 (2021).

[43] Shuokai Li, Ruobing Xie, Yongchun Zhu, Xiang Ao, Fuzhen Zhuang, and Qing He. 2022. User-centric conversational recommendation with multi-aspect user modeling. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 223–233.

[44] Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'
Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Jaymin Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022. Competition-level code generation with AlphaCode. Science 378 (2022), 1092–1097.

[45] Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Asian Federation of Natural Language Processing, Taipei, Taiwan, 986–995. https://aclanthology.org/I17-1099

[46] Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. 2018. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 world wide web conference. 689–698.

[47] Hugo Liu and Push Singh. 2004.
ConceptNet – a practical commonsense reasoning tool-kit. BT technology journal 22, 4 (2004), 211–226.

[48] Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, and Yan Zhang. 2023. Is ChatGPT a Good Recommender? A Preliminary Study. arXiv:2304.10149 [cs.IR]

[49] Yu Lu, Junwei Bao, Yan Song, Zichen Ma, Shuguang Cui, Youzheng Wu, and Xiaodong He. 2021. RevCore: Review-Augmented Conversational Recommendation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 1161–1173.

[50] Wenchang Ma, Ryuichi Takanobu, and Minlie Huang. 2021. CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for Conversational Recommendation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.139

[51] OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL]

[52] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022.
Training language models to follow instructions with human feedback. In NeurIPS. http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html

[53] Gustavo Penha and Claudia Hauff. 2020. What does BERT know about books, movies and music? Probing BERT for conversational recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems. 388–397.

[54] Zhaochun Ren, Zhi Tian, Dongdong Li, Pengjie Ren, Liu Yang, Xin Xin, Huasheng Liang, Maarten de Rijke, and Zhumin Chen. 2022. Variational Reasoning about User Preferences for Conversational Recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 165–175.

[55] Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International conference on data mining. IEEE, 995–1000.
[56] Alireza Salemi, Sheshera Mysore, Michael Bendersky, and Hamed Zamani. 2023. LaMP: When Large Language Models Meet Personalization. arXiv preprint arXiv:2304.11406 (2023).

[57] Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M.
Rush. 2022. Multitask Prompted Training Enables Zero-Shot Task Generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. https://openreview.net/forum?id=9Vrb9D0WI4

[58] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. 2015. AutoRec: Autoencoders meet collaborative filtering. In Proceedings of the 24th international conference on World Wide Web. 111–112.

[59] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer.
In Proceedings of the 28th ACM international conference on information and knowledge management. 1441–1450.

[60] Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems 35 (2022), 21831–21843.

[61] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. LLaMA: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).

[62] Wenjie Wang, Xinyu Lin, Fuli Feng, Xiangnan He, and Tat-Seng Chua. 2023. Generative Recommendation: Towards Next-generation Recommender Paradigm. arXiv:2304.03516 [cs.IR]

[63] Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, and Ji-Rong Wen. 2023. Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models. arXiv preprint arXiv:2305.13112 (2023).

[64] Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 1929–1937.

[65] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022.
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 24824–24837. https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf

[66] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837.

[67] Ga Wu, Kai Luo, Scott Sanner, and Harold Soh. 2019. Deep language-based critiquing for recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems. 137–145.

[68] Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196 (2023).

[69] Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B Dolan. 2020. DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics:
System Demonstrations. 270–278.

[70] Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Bo Long, and Jian Pei. 2022. Multiple Choice Questions based Multi-Interest Policy Learning for Conversational Recommendation. In Proceedings of the ACM Web Conference 2022. 2153–2162.

[71] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023).

[72] Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. 2023. CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X. arXiv:2303.17568 [cs.LG]

[73] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-Rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM international conference on information & knowledge management. 1893–
1902.

[74] Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, and Jingsong Yu. 2020. Improving conversational recommender systems via knowledge graph based semantic fusion. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 1006–1014.

[75] Kun Zhou, Yuanhang Zhou, Wayne Xin Zhao, Xiaoke Wang, and Ji-Rong Wen. 2020. Towards Topic-Guided Conversational Recommender System. In Proceedings of the 28th International Conference on Computational Linguistics. 4128–4139.

[76] Jie Zou, Evangelos Kanoulas, Pengjie Ren, Zhaochun Ren, Aixin Sun, and Cheng Long. 2022. Improving conversational recommender systems via transformer-based sequential modelling. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2319–2324.
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
Yubo Shu, School of Computer Science, Fudan University, Shanghai, China, [email protected]
Haonan Zhang, School of Computer Science, Fudan University, Shanghai, China, [email protected]
Hansu Gu, Seattle, United States, [email protected]
Peng Zhang*, Shanghai Key Laboratory of Data Science, Fudan University, Shanghai, China, [email protected]
Tun Lu*, School of Computer Science, Fudan University, Shanghai, China, [email protected]
Dongsheng Li, Microsoft Research Asia, Shanghai, China, [email protected]
Ning Gu, School of Computer Science, Fudan University, Shanghai, China, [email protected]
ABSTRACT

The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
∗Corresponding author.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
XXX'24, 2024, Singapore
© 2024 Association for Computing Machinery.
ACM ISBN 978-1-4503-XXXX-X/18/06...$15.00
https://doi.org/XXXXXXX.XXXXXXX

1 INTRODUCTION

Recommender systems hold a pivotal role in Human-Computer Interaction (HCI) by personalizing content and services to individual preferences, thereby enriching user experience and aiding in decision-making [29]. They efficiently filter information, effectively managing overload and assisting users in locating relevant content.
However, there remain notable challenges. Striking the delicate balance between recommendation accuracy and user satisfaction is a fundamental objective [12, 20]. Addressing biases in recommendations [4] and empowering users with control while preserving their privacy remains a pressing concern [8]. Additionally, simplifying transitions into new domains and alleviating user burden stand as ongoing challenges [41], typically revealing themselves as a cold-start problem.

While much of the pioneering research primarily focuses on addressing challenges from the perspective of the recommender system, we argue that solving these issues is not the sole responsibility of recommender systems. Addressing challenges from the human perspective presents a new and promising angle. For instance, employing advanced user modeling techniques to capture user behavior and preferences allows for a delicate balance between user satisfaction and recommendation precision. Engaging users in a cooperative manner within the recommendation process enables them to define profiles, tailor preferences, and provide explicit feedback. This not only helps mitigate biases but also empowers users, enhancing their control over recommendations and protecting privacy. When confronted with the cold-start challenge, understanding user preferences and effectively generalizing them in uncharted domains can significantly alleviate the burden on users entering unfamiliar territories. These human-centered strategies represent orthogonal efforts to complement existing recommender systems.
We propose a comprehensive framework RAH, which stands for Recommender system, Assistant, and Human. Within this framework, the assistant acts as an intelligent and personalized helper, leveraging LLMs to learn and comprehend a user's personality from their behaviors. The assistant then provides tailored actions in line with the user's personality. Operating within this framework, RAH opens up avenues to alleviate user burden, mitigate biases, and enhance user control over recommended outcomes and personal privacy. Each assistant comprises several LLM-based agents.

(1) Perceive Agent: Understands and interprets information within recommendations, including item features and user feedback implications.
(2) Learn Agent: Assimilates user personalities from their behaviors and stores them in personality libraries.
(3) Act Agent: Executes actions based on the learned personality, such as filtering out disliked items for the user.
(4) Critic Agent: Validates whether the executed action aligns with the user's preferences and analyzes adjustments to reduce discrepancies.
(5) Reflect Agent: Scrutinizes and optimizes the accumulated learned personality, addressing issues like duplication and conflicts.

Furthermore, we enhance our proposed assistant with the Learn-Act-Critic loop and a reflection mechanism to improve alignment with the user. Within the Learn-Act-Critic loop, the Learn, Act, and Critic Agents work collaboratively to process user actions, refining their understanding of the user's personality. This iterative loop continues until the Act Agent accurately mirrors the learned personality, ensuring alignment with user interactions validated by the Critic Agent. Meanwhile, the reflection mechanism employs the Reflect Agent to periodically revise the learned personality, maintaining an up-to-date and accurate representation.

In our experiment, we evaluate the RAH framework using real-world data in three recommendation domains. Firstly, we observe that the Learn-Act-Critic loop and reflection mechanism significantly enhance the alignment of the assistant with the user's personality.
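The Learn-Act-Critic loop described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `call_llm` is a stand-in for a real language-model call, the Act Agent is reduced to a keyword check, and all prompt wording is assumed.

```python
def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned personality
    update so the sketch runs end to end."""
    return "likes: character-driven sci-fi; dislikes: jump-scare horror"

def learn_act_critic(item: str, user_feedback: str, personality: str,
                     max_rounds: int = 3) -> str:
    """Refine the learned personality until the Act Agent's prediction
    matches the user's real feedback (the Critic Agent's check)."""
    for _ in range(max_rounds):
        # Learn Agent: update the personality from the observed feedback,
        # reasoning about why people might like or dislike the item.
        personality = call_llm(
            f"Current personality: {personality}\n"
            f"The user said '{user_feedback}' to '{item}'.\n"
            "Why might some individuals like the item? Why might some "
            "individuals dislike the item? Update the personality."
        )
        # Act Agent (stubbed): predict the reaction from the personality.
        predicted = "Like" if "sci-fi" in item.lower() else "Dislike"
        # Critic Agent: accept once the prediction matches ground truth.
        if predicted == user_feedback:
            break
    return personality

profile = learn_act_critic("A sci-fi film", "Like", personality="(empty)")
```

The loop terminates either when the Critic Agent confirms alignment or after a fixed budget of rounds, mirroring the iteration-until-match behavior described in the text.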
Post-learning from users, the assistant is capable of generating proxy actions across various recommender systems, effectively reducing human burden. The second experiment demonstrates that these proxy actions lead to a notable improvement in recommender systems, achieving enhanced efficiency with reduced user interactions. Moreover, in the third part of the experiment, we investigate the use of well-learned assistants to express users' feedback on less popular items, mitigating bias within the system. Finally, we delve into additional strategies within the RAH framework to tackle human-centered concerns regarding user control. The assistant comprehends users' intentions, delivers more detailed recommended results to fulfill them, and implements control strategies to safeguard users' privacy. Our contributions can be summarized as follows:

• We utilize LLMs from the human perspective and propose a more human-centered recommendation framework, RAH.
• Within the RAH framework, our assistant is designed with the Learn-Act-Critic loop and a reflection mechanism to achieve a nuanced understanding and alignment with user personalities.
• Through experimentation, we validate the RAH framework's performance in addressing recommendation challenges partnered with various recommendation models, including cold-start in cross-domain recommendation, popularity bias, and user control and privacy.

2 RAH (RECSYS-ASSISTANT-HUMAN)

2.1 Overall

The principle behind RAH's design is taking a human-centered approach to address recommender system challenges. As shown in Figure 1, RAH comprises three components: the recommender system, the intelligent assistant, and the human user. Unlike traditional recommendations solely between systems and users, RAH introduces an assistant as an intermediary. This assistant acts as a personalized helper for the user. It utilizes large language models (LLMs) to comprehend user personalities based on their behaviors. The assistant then provides actions tailored to each user's personality. Within this framework, the assistant facilitates two key workflows:

RecSys→Assistant→Human. This workflow focuses on the assistant filtering personalized recommendations for the end user, as shown by the solid black arrow in Figure 1.
• Recommender systems initially generate candidate items spanning different domains such as books, movies, and games.
• The assistant aggregates these cross-domain recommendations. It retrieves the user's learned personality from its memory. Using the user's personality profile, the assistant further filters the candidate items to create a tailored list.
• Finally, the user receives a unified personalized set of filtered recommendations from the assistant.

To enable effective filtering across diverse items, the assistant incorporates powerful LLMs. They provide the reasoning skills and real-world knowledge needed to comprehend various item features.

Human→Assistant→RecSys. This workflow enables the assistant to learn from user feedback and accordingly tune recommender systems, as depicted by the dotted black arrow in Figure 1.
• The user first provides feedback on items, e.g., indicating "Like" or "Dislike", and the assistant receives this initial feedback instead of the recommender systems.
• The assistant will then start to learn the user's personality from the user's feedback.
• Lastly, the assistant will process the user's feedback into the assistant's feedback. This allows it to selectively forward user preferences to recommender systems.

By introducing an intermediary assistant focused on the human, RAH opens up new possibilities to address human-centered challenges. The assistant's capabilities in learning and acting upon user personalities strengthen these human-centered aspects. It facilitates key functionalities like mitigating user burden and bias while enhancing user control and privacy.

2.2 Human-Centered Design Goals

As stated earlier, the key goal of RAH is to address human-centered challenges in recommender systems. This subsection introduces three pivotal design goals for addressing human-centered challenges. (Our methods to achieve the design goals can be found in Section 3.3.)

Reduce User Burden. In recommendation, the user burden can come from the initial interactions in a new domain and the redundant feedback across domains. In the RAH framework, the assistant should serve as a personal helper to reduce user burden in multiple ways.
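The two workflows of Section 2.1 can be sketched as follows. The data shapes and function names here are illustrative assumptions, not the paper's implementation; the point is only that filtering uses the learned dislikes while feedback forwarding respects user-designated privacy.

```python
def filter_for_user(candidates, personality):
    """RecSys -> Assistant -> Human: drop candidates whose tags match
    any of the user's learned dislikes."""
    return [item for item in candidates
            if not any(tag in personality["dislikes"] for tag in item["tags"])]

def forward_feedback(feedback, personality):
    """Human -> Assistant -> RecSys: selectively forward user feedback,
    withholding items the user marked as private."""
    return {item: verdict for item, verdict in feedback.items()
            if item not in personality.get("private", set())}

# Hypothetical learned personality and candidate pool.
personality = {"dislikes": {"horror"}, "private": {"Movie C"}}
candidates = [
    {"name": "Movie A", "tags": ["sci-fi"]},
    {"name": "Movie B", "tags": ["horror"]},
]

shortlist = filter_for_user(candidates, personality)
proxy = forward_feedback({"Movie C": "Like", "Movie D": "Dislike"}, personality)
```

Here the horror title is filtered out of the shortlist, and the feedback on the private item is never forwarded to the recommender system.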
Figure 1: The figure demonstrates an overall view of the RAH framework. Core workflows can be divided into RecSys→Assistant→Human (the black solid arrow) and Human→Assistant→RecSys (the black dotted arrow).

In both a single domain and across domains, the assistant should comprehend user tendencies from limited interactions and learn a unified user personality. The assistant should be able to express a unified personality to new recommender systems, alleviating the cold-start issue and reducing user burden. Besides, the assistant should provide proxy feedback to refine recommender systems, minimizing unnecessary user interactions.

Mitigate Bias. Biased recommended results can cause unfairness problems and harm the user experience. In the RAH framework, we design the assistant to represent users, generating more feedback on unseen items and thus mitigating the user's selection bias.

Enhance User Control. Considering the pattern that the recommender system actively interacts with users, it is necessary to address user control in recommendation [26, 27]. However, the majority of current recommender systems are uncontrollable, and users can only passively receive the recommendation results [8]. Therefore, in the RAH framework, the assistant should enhance user control over the recommendation results they receive and over what the recommender systems learn about them, such as non-privacy data.

3 ASSISTANT

In this section, we first provide an overview of the assistant's components and inner mechanisms. We then elaborate on how the assistant achieves human-centered goals.

3.1 Components

3.1.1 Perceive Agent. The Perceive Agent functions as the initial processing point for incoming information. Specifically, in the context of recommendations, its primary task is to augment the features associated with a given item, thereby enhancing the assistant's overall comprehension. For instance, when provided with a movie name, the Perceive Agent can supply additional relevant information about the movie. As illustrated in Figure 2(a), this additional information generally consists of two components: (1) a concise description of the item, such as a plot summary of the movie,
and (2) a set of specific attributes related to the item, like the movie tags. Additionally, this information enriched by the Perceive Agent can further aid other agents, such as assisting the Learn Agent in extracting personalities from user behaviors.

3.1.2 Learn Agent. The Learn Agent's mission is to identify human personalities based on interactions with items, such as "Like", "Dislike", and user ratings. Drawing inspiration from established research in recommender systems [9, 14, 24], we conceptualize human personalities as a combination of likes and dislikes. In our implementation, we input items, human feedback on items, and insights from the Perceive Agent into the Learn Agent. As depicted in Figure 2(b), the Learn Agent then generates the learned preferences in response to positive feedback and the dislikes for negative feedback. Moreover, instead of direct learning, we require the agent to address two key questions: "Why might some individuals like the item?" and "Why might some individuals dislike the item?" These responses aid the agent in filtering out invalid characteristics and promoting a more nuanced understanding of personalities.

3.1.3 Act Agent. The Act Agent is responsible for generating actions based on the learned personality. The Act Agent receives an item's information and a user's personality as input. Subsequently, it generates a predicted action, such as "Like" when the item aligns with the user's preferences and "Dislike" when it aligns with their dislikes. As shown in Figure 2(c), we incorporate a chain-of-thought [35] approach in our implementation: (1) hypothesizing reasons for potential preference or dislike towards the item, (2) analyzing the likely perception of the item by a human with the given personality, (3) simulating comments on the item from the perspective of the human [15, 45], and finally, (4) predicting the human's reaction to the item, categorized as either "like" or "dislike".

3.1.4 Critic Agent. The core function of the Critic Agent is to evaluate the correctness of actions predicted by Act Agents. A match between the predicted action and the ground truth action (true user actions) suggests that the learned personality model aligns with the user. However, in cases of incorrect predictions, the
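The Act Agent's four-step chain of thought and the Critic Agent's matching check can be sketched as below. The prompt wording and function names are assumptions for illustration; only the four step names follow the stages described in the text.

```python
def build_act_prompt(item: str, personality: str) -> str:
    """Assemble a chain-of-thought prompt for the Act Agent, walking the
    four stages described in Section 3.1.3."""
    return "\n".join([
        f"Item: {item}",
        f"User personality: {personality}",
        "Step 1: Hypothesize reasons this user might like or dislike the item.",
        "Step 2: Analyze how a person with this personality would perceive it.",
        "Step 3: Write a short comment from this person's perspective.",
        "Step 4: Predict the reaction. Answer with exactly 'Like' or 'Dislike'.",
    ])

def critic_check(predicted: str, ground_truth: str) -> bool:
    """Critic Agent: a match means the learned personality model aligns
    with the user; a mismatch triggers another refinement round."""
    return predicted == ground_truth
```

In the full loop, a `False` from `critic_check` would send the interaction back to the Learn Agent for another refinement pass.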