id
stringlengths
12
15
title
stringlengths
8
162
content
stringlengths
1
17.6k
prechunk_id
stringlengths
0
15
postchunk_id
stringlengths
0
15
arxiv_id
stringlengths
10
10
references
listlengths
1
1
2309.00267#31
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Constitutional ai: Harmless- ness from ai feedback. Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â 1901. Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022.
2309.00267#30
2309.00267#32
2309.00267
[ "1707.06347" ]
2309.00267#32
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311. Paul F Christiano, Jan Leike, Tom Brown, Miljan Mar- tic, Shane Legg, and Dario Amodei. 2017. Deep reinforcement learning from human preferences. Ad- vances in neural information processing systems, 30. Bosheng Ding, Chengwei Qin, Linlin Liu, Yew Ken Chia, Boyang Li, Shafiq Joty, and Lidong Bing. 2023. Is GPT-3 a good data annotator? In Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 11173â 11195, Toronto, Canada. Association for Computational Linguistics. and Swabha Understanding dataset Swayamdipta. 2022. difficulty with V-usable information. In Proceedings of the 39th International Conference on Machine Learning, volume 162 of Proceedings of Machine Learning Research, pages 5988â 6008. PMLR. Tom Everitt and Marcus Hutter. 2016. Avoiding wire- heading with value reinforcement learning. In Arti- ficial General Intelligence: 9th International Con- ference, AGI 2016, New York, NY, USA, July 16-19, 2016, Proceedings 9, pages 12â
2309.00267#31
2309.00267#33
2309.00267
[ "1707.06347" ]
2309.00267#33
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
22. Springer. Angela Fan, Mike Lewis, and Yann Dauphin. 2018. Hierarchical neural story generation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 889â 898, Melbourne, Australia. Association for Computational Linguistics. Steven Y. Feng, Varun Gangal, Jason Wei, Sarath Chan- dar, Soroush Vosoughi, Teruko Mitamura, and Ed- uard Hovy. 2021.
2309.00267#32
2309.00267#34
2309.00267
[ "1707.06347" ]
2309.00267#34
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
A survey of data augmentation approaches for NLP. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 968â 988, Online. Association for Computa- tional Linguistics. Roy Fox, Ari Pakman, and Naftali Tishby. 2015. Tam- ing the noise in reinforcement learning via soft up- dates. arXiv preprint arXiv:1512.08562. Yang Gao, Christian M Meyer, Mohsen Mesgar, and Iryna Gurevych. 2019. Reward learning for efficient reinforcement learning in extractive document sum- marisation. arXiv preprint arXiv:1907.12894. Matthieu Geist, Bruno Scherrer, and Olivier Pietquin. 2019. A theory of regularized markov decision pro- In International Conference on Machine cesses. Learning, pages 2160â 2169. PMLR. Fabrizio Gilardi, Meysam Alizadeh, and Maël Kubli. 2023. Chatgpt outperforms crowd-workers for text- annotation tasks. arXiv preprint arXiv:2303.15056. Amelia Glaese, Nat McAleese, Maja Trebacz, John Aslanides, Vlad Firoiu, Timo Ewalds, Maribeth Rauh, Laura Weidinger, Martin Chadwick, Phoebe Thacker, et al. 2022. Improving alignment of dialogue agents arXiv preprint via targeted human judgements. arXiv:2209.14375. Google. 2023. Ai platform data labeling service https://cloud.google.com/ pricing. ai-platform/data-labeling/pricing# labeling_costs. Accessed: 2023-09-28. Rohan Anil Google, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Pas- sos, Siamak Shakeri, Emanuel Taropa, Paige Bai- ley, Zhifeng Chen, Eric Chu, Jonathan H.
2309.00267#33
2309.00267#35
2309.00267
[ "1707.06347" ]
2309.00267#35
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Clark, Laurent El Shafey, Yanping Huang, Kathy Meier- Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gus- tavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Brad- bury, Siddhartha Brahma, Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Dà az, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lu- cas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jef- frey Hui, Jeremy Hurwitz, Michael Isard, Abe Itty- cheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Mar- cello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Par- rish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R.
2309.00267#34
2309.00267#36
2309.00267
[ "1707.06347" ]
2309.00267#36
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Ki- ran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. Palm 2 technical report.
2309.00267#35
2309.00267#37
2309.00267
[ "1707.06347" ]
2309.00267#37
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Ronald A Howard. 1960. Dynamic programming and markov processes. John Wiley. Jiaxin Huang, Shixiang Shane Gu, Le Hou, Yuexin Wu, Xuezhi Wang, Hongkun Yu, and Jiawei Han. 2022. Large language models can self-improve. arXiv preprint arXiv:2210.11610. Natasha Jaques, Shixiang Gu, Dzmitry Bahdanau, José Miguel Hernández-Lobato, Richard E Turner, and Douglas Eck. 2017.
2309.00267#36
2309.00267#38
2309.00267
[ "1707.06347" ]
2309.00267#38
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Sequence tutor: Conserva- tive fine-tuning of sequence generation models with kl-control. In International Conference on Machine Learning, pages 1645â 1654. PMLR. Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. 2020. Scaling laws for neural language models. arXiv preprint arXiv:2001.08361.
2309.00267#37
2309.00267#39
2309.00267
[ "1707.06347" ]
2309.00267#39
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
M. G. Kendall and B. Babington Smith. 1939. The Problem of m Rankings. The Annals of Mathemati- cal Statistics, 10(3):275 â 287. Minae Kwon, Sang Michael Xie, Kalesha Bullard, and Dorsa Sadigh. 2022. Reward design with language models. In The Eleventh International Conference on Learning Representations. Viet Dac Lai, Chien Van Nguyen, Nghia Trung Ngo, Thuat Nguyen, Franck Dernoncourt, Ryan A Rossi, and Thien Huu Nguyen. 2023.
2309.00267#38
2309.00267#40
2309.00267
[ "1707.06347" ]
2309.00267#40
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Okapi: Instruction- tuned large language models in multiple languages with reinforcement learning from human feedback. arXiv preprint arXiv:2307.16039. Yiheng Liu, Tianle Han, Siyuan Ma, Jiayue Zhang, Yuanyuan Yang, Jiaming Tian, Hao He, Antong Li, Mengshen He, Zhengliang Liu, et al. 2023. Summary of chatgpt/gpt-4 research and perspective towards the future of large language models. arXiv preprint arXiv:2304.01852. Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. 2023.
2309.00267#39
2309.00267#41
2309.00267
[ "1707.06347" ]
2309.00267#41
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651. James Manyika. An overview of genera- experiment with early https://ai.google/static/ 2023. bard: tive documents/google-about-bard.pdf. Accessed: 2023-08-23. an ai. Yu Meng, Martin Michalski, Jiaxin Huang, Yu Zhang, Tarek Abdelzaher, and Jiawei Han. 2023. Tun- ing language models as training data generators for augmentation-enhanced few-shot learning. In Inter- national Conference on Machine Learning, pages 24457â
2309.00267#40
2309.00267#42
2309.00267
[ "1707.06347" ]
2309.00267#42
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
24477. PMLR. Sewon Min, Xinxi Lyu, Ari Holtzman, Mikel Artetxe, Mike Lewis, Hannaneh Hajishirzi, and Luke Zettle- moyer. 2022. Rethinking the role of demonstrations: What makes in-context learning work? In Proceed- ings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 11048â 11064. Reiichiro Nakano, Jacob Hilton, Suchir Balaji, Jeff Wu, Long Ouyang, Christina Kim, Christopher Hesse, Shantanu Jain, Vineet Kosaraju, William Saunders, et al. 2021.
2309.00267#41
2309.00267#43
2309.00267
[ "1707.06347" ]
2309.00267#43
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Webgpt: Browser-assisted question- answering with human feedback. arXiv preprint arXiv:2112.09332. OpenAI. 2023a. Gpt-4 technical report. OpenAI. 2023b. Openai pricing. https://openai. com/pricing. Accessed: 2023-09-28. Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, et al. 2022.
2309.00267#42
2309.00267#44
2309.00267
[ "1707.06347" ]
2309.00267#44
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Training language models to follow instruc- tions with human feedback. Advances in Neural Information Processing Systems, 35:27730â 27744. Pouya Pezeshkpour and Estevam Hruschka. 2023. Large language models sensitivity to the order of options in multiple-choice questions. arXiv preprint arXiv:2308.11483. Paul Roit, Johan Ferret, Lior Shani, Roee Aharoni, Ge- offrey Cideron, Robert Dadashi, Matthieu Geist, Ser- tan Girgin, Léonard Hussenot, Orgad Keller, et al. 2023. Factually consistent summarization via rein- forcement learning with textual entailment feedback. arXiv preprint arXiv:2306.00186. John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, and Oleg Klimov. 2017. Proxi- mal policy optimization algorithms. arXiv preprint arXiv:1707.06347. Noam Shazeer and Mitchell Stern. 2018.
2309.00267#43
2309.00267#45
2309.00267
[ "1707.06347" ]
2309.00267#45
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Adafactor: Adaptive learning rates with sublinear memory cost. CoRR, abs/1804.04235. Nisan Stiennon, Long Ouyang, Jeffrey Wu, Daniel Ziegler, Ryan Lowe, Chelsea Voss, Alec Radford, Dario Amodei, and Paul F Christiano. 2020. Learn- ing to summarize with human feedback. Advances in Neural Information Processing Systems, 33:3008â 3021. Richard S Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999.
2309.00267#44
2309.00267#46
2309.00267
[ "1707.06347" ]
2309.00267#46
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Policy gradient methods for reinforcement learning with function approxima- tion. Advances in neural information processing systems, 12. Romal Thoppilan, Daniel De Freitas, Jamie Hall, Noam Shazeer, Apoorv Kulshreshtha, Heng-Tze Cheng, Alicia Jin, Taylor Bos, Leslie Baker, Yu Du, et al. 2022. Lamda: Language models for dialog applica- tions. arXiv preprint arXiv:2201.08239. Boshi Wang, Sewon Min, Xiang Deng, Jiaming Shen, You Wu, Luke Zettlemoyer, and Huan Sun. 2022a. Towards understanding chain-of-thought prompting: An empirical study of what matters. arXiv preprint arXiv:2212.10001. Peiyi Wang, Lei Li, Liang Chen, Dawei Zhu, Binghuai Lin, Yunbo Cao, Qi Liu, Tianyu Liu, and Zhifang Sui. 2023. Large language models are not fair evaluators. arXiv preprint arXiv:2305.17926. Shuohang Wang, Yang Liu, Yichong Xu, Chenguang Zhu, and Michael Zeng. 2021a. Want to reduce label- ing cost? gpt-3 can help. In Findings of the Associ- ation for Computational Linguistics: EMNLP 2021, pages 4195â 4205. Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc V Le, Ed H Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2022b.
2309.00267#45
2309.00267#47
2309.00267
[ "1707.06347" ]
2309.00267#47
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Self-consistency improves chain of thought reasoning in language models. In The Eleventh International Conference on Learning Representations. Zirui Wang, Adams Wei Yu, Orhan Firat, and Yuan Cao. 2021b. Towards zero-label language learning. arXiv preprint arXiv:2109.09193. Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned language mod- els are zero-shot learners. In International Confer- ence on Learning Representations. Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits rea- soning in large language models. Advances in Neural Information Processing Systems, 35:24824â 24837. Ronald J Williams. 1992.
2309.00267#46
2309.00267#48
2309.00267
[ "1707.06347" ]
2309.00267#48
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Simple statistical gradient- following algorithms for connectionist reinforcement learning. Machine learning, 8:229â 256. Lijun Wu, Fei Tian, Tao Qin, Jianhuang Lai, and Tie- Yan Liu. 2018. A study of reinforcement learning for neural machine translation. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3612â 3621. Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016.
2309.00267#47
2309.00267#49
2309.00267
[ "1707.06347" ]
2309.00267#49
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Googleâ s neural machine translation system: Bridging the gap between human and machine trans- lation. arXiv preprint arXiv:1609.08144. Yuxiang Wu and Baotian Hu. 2018. Learning to extract coherent summary via deep reinforcement learning. In Proceedings of the AAAI Conference on Artificial Intelligence, page 5602. Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, and Yuandong Tian. 2023.
2309.00267#48
2309.00267#50
2309.00267
[ "1707.06347" ]
2309.00267#50
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Rlcd: Reinforcement learning from contrast distillation for language model alignment. Daniel M Ziegler, Nisan Stiennon, Jeffrey Wu, Tom B Brown, Alec Radford, Dario Amodei, Paul Chris- tiano, and Geoffrey Irving. 2019. Fine-tuning lan- arXiv guage models from human preferences. preprint arXiv:1909.08593. # A RLHF Preliminaries We review the RLHF pipeline introduced in Sti- ennon et al. (2020); Ouyang et al. (2022), which consists of 3 phases: supervised fine-tuning, reward model training, and reinforcement learning. # A.1 Supervised Fine-tuning A pre-trained LLM is fine-tuned on a high quality labeled dataset for a downstream task (e.g. summa- rization) using token-level supervision to produce a supervised fine-tuned (SFT) model Ï
2309.00267#49
2309.00267#51
2309.00267
[ "1707.06347" ]
2309.00267#51
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
SF T . # A.2 Reward Modeling Given an input x, we sample a pair of responses (y1, y2) â ¼ Ï from one or more models, where oftentimes Ï is the SFT model. The input and responses are sent to human annotators to rate which response is better according to some cri- teria. These annotations form a dataset of triplets D = {(x, yw, yl)}, where yw and yl are the pre- ferred and non-preferred responses, respectively. A reward model (RM) rÏ is trained by minimizing the following loss: £.(6)= -E (yw yi)~D [logo(ra(e,4w) ~ role. m))]; where Ï is the sigmoid function. A.3 Reinforcement Learning A policy Ï RL is initialized from the SFT model weights and then optimized with reinforcement learning to maximize the reward given by the RM, which serves as a proxy for human preferences. Op- tionally, a Kullback-Leibler (KL) divergence term DKL is added to the objective to penalize Ï RL for deviating from the original SFT policy Ï SF T , con- trolled by the hyperparameter β (Fox et al., 2015; Geist et al., 2019). The KL loss helps prevent Ï RL from drifting into a region where it generates θ language that is highly rewarded by the RM yet consists of low-quality or unnatural language - a phenomenon known as â reward hackingâ (Everitt and Hutter, 2016; Amodei et al., 2016). The op- timization objective is described by the equation below:
2309.00267#50
2309.00267#52
2309.00267
[ "1707.06347" ]
2309.00267#52
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
J@= &§E y~re(|a) [(1- B)ra(ule) ~ BDxx (8 (yl) n°"? (yla))], where β is a hyperparameter between 0 and 1. # B Position Bias in LLM Labelers Model Size PaLM 2 L PaLM 2 S PaLM 2 XS Table 4: Position bias is more prevalent in smaller model sizes, measured by the percentage of examples where the LLM prefers the same position even after swapping the order of candidates (â % Same Position Preferredâ ). Analysis is conducted using the â Detailed + CoT 0-shotâ prompt for the summarization task. Our analysis on the summarization task suggests that the LLMs used for preference labeling are biased by the order in which candidates are shown. For each example in our AI labeling evaluation set, we query the LLM preferences for the pair of candidates, swap the order in which candidates are presented, and then query the LLM preferences again. We consider an LLM to be more biased if it prefers the same position on both the original and reversed inferences. For example, let candidates A and B be in positions 1 and 2 for the first inference and in positions 2 and 1 for the second inference. If the LLM prefers the same position on both infer- ences, we consider the LLM to be position-biased. We measure position bias by computing â % Same Position Preferredâ - the percentage of inference pairs where this occurs. A higher metric value indicates a more biased LLM. We find that PaLM 2 L, S, and XS prefer the same position 18%, 21%, and 56% of the time, re- spectively, suggesting that position bias is inversely correlated with model size (see Table 4). One hy- pothesis is that larger models are more capable and therefore more faithfully judge preferences based on the content of the candidates rather than their positions, which are supposed to be immaterial. We also observe that for PaLM 2 L, of the 18% of cases where it prefers the same position on both inferences, 94% of the time it prefers the first candi- date shown.
2309.00267#51
2309.00267#53
2309.00267
[ "1707.06347" ]
2309.00267#53
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
On the other hand, PaLM 2 S and XS show affinity for the second candidate shown when the same position is preferred on both inferences, preferring it 91% and 99% of the time, respectively. These biases are statistically significant under a two-sided binomial test at α = 0.05. # C Dataset Details For summarization, we use the filtered Reddit TL;DR dataset (Stiennon et al., 2020), containing posts from Reddit12 that have been filtered to en- sure high quality. The dataset contains 123k posts, where â ¼5% is held out as a validation set. Additionally, we use OpenAIâ s human prefer- ence dataset created from the filtered Reddit TL;DR dataset. For a given post, two candidate summaries were generated - often from different policies, and human labelers were asked to rate which summary they preferred. The total dataset comprises 92k pairwise comparisons. For helpful and harmless dialogue generation, we use Anthropicâ s Helpful and Harmless prefer- ence datasets13 (Bai et al., 2022a). Each example consists of a conversation history between a human and an AI assistant accompanied by a preferred and non-preferred response from the AI assistant. Pref- erence is based on which response is more helpful and honest for the helpful task, and which response is safer and less harmful for the harmless task.
2309.00267#52
2309.00267#54
2309.00267
[ "1707.06347" ]
2309.00267#54
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Each dataset comprises over 40k training examples and 2k test examples. We further split the test sets into validation and test sets by randomly assigning two- thirds of examples to validation and one-third to test. # D LLM Labeling Details For LLM labeling, we set a maximum input con- text length of 4096 tokens. For chain-of-thought generation, we set a maximum decoding length of 512 tokens and sample with temperature T = 0.0 (i.e. greedy decoding). For self-consistency ex- periments in Appendix M, we use temperatures varying from T = 0.3 to T = 1.0 with top-K sampling (Fan et al., 2018), where K = 40. In Section 4.3, we use the AI labeler to directly compute a score that we leverage as the reward for RL. We use the following prompt: â You are an ex- pert summary rater. Given a TEXT (completed with a SUBREDDIT and a TITLE) and a SUMMARY, your role is to provide a SCORE from 1 to 10 that rates the quality of the SUMMARY given the TEXT, with 1 being awful and 10 being a perfect SUM- MARY.â , followed by the input Reddit post, then # 12www.reddit.com 13We use the helpful-base and harmless-base https://huggingface.co/ datasets datasets/Anthropic/hh-rlhf. from
2309.00267#53
2309.00267#55
2309.00267
[ "1707.06347" ]
2309.00267#55
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
the summary to score preceded by â SUMMARY: â , and a final â SCORE: â . PaLM 2 models are publicly available through Google Cloudâ s Vertex AI14, though we cannot guarantee full reproducibility as the models acces- sible through Google Cloud are subject to change. # E Post-RL Response Formatting For summarization, we observed that summaries generated by RLHF and RLAIF policies often in- cluded superfluous symbols like periods or spaces at the end of the response - possibly due to â reward hackingâ . Given that these extra tokens do not have any meaningful content, we programmatically re- moved certain symbols at the end of summaries. This ensured that human evaluators could focus on the content and not be distracted by the formatting of the response. # F REINFORCE for Language Models Consider a deterministic, finite-horizon MDP M = (X , A, R, P, γ) (Howard, 1960). At each step t, given the current state Xt â X and the next action At â A, the model receives a reward Rt = R(Xt, At) and transitions to the next state Xt+1 = P (Xt, At). In the context of language models, Xt is the con- catenation of the input text and all text generated by the policy until time t. Action At is the token from the considered vocabulary decoded at time t by the stochastic policy Ï Î¸(·|Xt), where θ rep- resents the policy parameters. Finally, the reward Rt is given by the RM. The RM is only evaluated when the language model response has been fully generated; all rewards prior to the final token are set to 0, while the reward corresponding to the final token is set to RT . The cumulative sum of rewards received when following the policy 79 from time-step ¢ is called the return.
2309.00267#54
2309.00267#56
2309.00267
[ "1707.06347" ]
2309.00267#56
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Generally, it is defined as Z, = yw y°â 'Rs. However, since only the terminal reward is non-zero and we set y = 1, the return can be simplified to Z, = Rr. t=0 generated un- der Ï Î¸, the policy gradient loss from REINFORCE is then defined as follows: Lea(@) = â Y7 log ma Al Xi)(Zi â V(X), t 14https://cloud.google.com/vertex-ai/ docs/generative-ai/learn/models where the bar notation denotes that no gradient is passed through the advantage term during back- propagation. The baseline value function V;t (a) estimates the return-to-go Z; when following the policy 7g and is parameterized by ~ (Williams, 1992; Sutton et al., 1999). It is trained with the following loss: (Zt â V Ï Ï (Xt))2. LV (Ï ) = t In practice, we optimize the regularized objec- tive in Sec. A.3. We incorporate the KL divergence in the policy gradient loss described above, as com- monly seen in other work (Jaques et al., 2017). # G Model Training Details SFT models for the summarization task are trained on the Reddit TL;DR dataset, with a batch size of 128 for a single epoch. We use the Adafac- tor (Shazeer and Stern, 2018) optimizer with a learning rate of 10â 5, and the maximum input and output lengths are 1024 and 128 tokens, respec- tively. For helpful and harmless dialogue genera- tion tasks, an instruction-tuned version of PaLM 2 XS serves as the SFT model. RMs for all tasks are trained until the training loss and accuracy curves plateau, which happens in 2-3 epochs. We use the Adafactor optimizer with a learning rate of 10â
2309.00267#55
2309.00267#57
2309.00267
[ "1707.06347" ]
2309.00267#57
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
5. Batch size is 128 for summarization RMs and 32 for RMs of other tasks. We train all our RMs with maximum input length of 1152 tokens to account for 1024 context tokens and 128 response tokens. We report the accuracies of the RMs in Appendix H. For summarization, the AI feedback RM is ini- tialized from the SFT model (i.e. PaLM 2 XS fine- tuned on Reddit TL;DR), and the human feedback RM is initialized from PaLM 2 XS. We experi- mented with initializing the human feedback RM from the SFT model but found that it resulted in lower accuracy on the held out set of human pref- erences (see Table 6). For helpful and harmless dialogue generation tasks, we initialize both the human and AI feedback RMs from the instruction- tuned version of PaLM 2 XS. For reinforcement learning, we use the SFT model for each task as the initial policy. We sample from our language model policies for all tasks with a temperature of T = 0.9 to encourage exploration. We train with a batch size of 128 and learning rate of 10â 5 for 8 epochs. We set β = 0.05 for the KL divergence loss. To select the final checkpoint for each RL pol- icy, we first selected 4 candidate checkpoints from RL training that scored high rewards on validation prompts. We then prompted an off-the-shelf LLM to judge the win rate of the RL checkpointâ
2309.00267#56
2309.00267#58
2309.00267
[ "1707.06347" ]
2309.00267#58
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
s sum- maries vs. the SFT policyâ s summaries. We also conducted manual inspection of a dozen examples. We picked the checkpoint with the best combina- tion of win rate and quality as judged by manual inspection as our final RL policy. # H Reward Model Accuracy Task Summarization Helpful Dialogue Harmless Dialogue Human Feedback 79.3% 76.0% 72.1% AI Feedback 74.2% 67.8% 69.7% Table 5: Pairwise accuracies of human feedback and AI feedback reward models across all tasks. Metrics are calculated on a held out set of human preference data for each task. Initialization PaLM 2 XS SFT Human Feedback 79.3% 78.7% AI Feedback 73.0% 74.2% Table 6: Results of initializing the summarization RMs on PaLM 2 XS vs. the SFT model. RM Variant Trained on â Base 0-shotâ labels Trained on labels from PaLM 2 XS AI Feedback 77.9% 66.4% Table 7: Accuracy values for variants of RMs trained on AI labels for the task of summarization. Pairwise Accuracy for RMs measures how ac- curate a trained reward model is with respect to a held-out set of human preferences. Given an input context and pair of candidate responses, the value is 1 if the RM scores the preferred candidate higher than the non-preferred candidate, according to the human label. Otherwise the value is 0. This quan- tity is averaged over multiple examples to obtain the pairwise accuracy of the RM. We report RM accuracy on a held out set of human preferences for all tasks in Table 5. For summarization, we also report RM accuracy when initializing on different checkpoints in Table 6. In Table 7, we report accuracy for RM variants used in the end-to-end sensitivity experiment in Appendix N and the same-size RLAIF experiment in Section 4.2. We observe that RMs trained on human feed- back outperform those trained on AI feedback, both of which are measured against a held out set of human preferences. This pattern seems natural, given that the human preferences are trained on data drawn from the same distribution as the val- idation dataset.
2309.00267#57
2309.00267#59
2309.00267
[ "1707.06347" ]
2309.00267#59
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
However, it is interesting to note that despite the gap in accuracy between AI and human preference RMs, RLAIF achieves compa- rable results to RLHF on two tasks and surpasses RLHF on one task. Additionally, we note that the summarization RMs trained on â Base 0-shotâ and â Detailed + CoT 0-shotâ (i.e. the default prompt- ing technique) achieve accuracies of 77.9% and 74.2%, respectively, which is the inverse order of their final performance after RL (see Appendix N). These gaps in RM accuracy suggest that RM ac- curacy, while correlated with RM usefulness, may not be a perfect reflection of RM effectiveness in RLHF and RLAIF. Ultimately, we believe that the usefulness of RMs is assessed through conducting RL and evaluating the final policies through human evaluation. # I Human Evaluation Details To conduct human evaluation, in total we created â ¼2k unique rating instances. Each instance com- prised a single context and three distinct model responses (e.g. responses from SFT, RLAIF, and RLHF policies), resulting in a total of â ¼6k unique (context, response) pairs subjected to human evalu- ation. Additionally, each instance was assessed by three independent raters, resulting in â ¼18k (con- text, response, rating) tuples. We measure the inter-annotator agreement with Kendallâ s Coefficient of Concordance W (Kendall and Smith, 1939) - a non-parametric statistic for as- sessing the agreement among multiple raters rank- ing multiple items. The values of Kendallâ s W range from 0 to 1, where 0 indicates perfect dis- agreement and 1 indicates perfect agreement. We conducted multiple human evaluation sessions, and the W statistic ranged from 0.6-0.7, indicating a reasonable level of agreement. # J Controlling for Response Length Response length often can influence human evalua- torsâ perception of quality (Stiennon et al., 2020), and our various policies generate responses that differ in length. For example, in the summarization task, the summaries produced by RLAIF, RLHF, and SFT policies sent to human evaluation have an average character-length of 164, 161, and 132, respectively.
2309.00267#58
2309.00267#60
2309.00267
[ "1707.06347" ]
2309.00267#60
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
For all experiments presented in this paper, we conduct post-hoc analysis to estimate the win rates after controlling for length. We take an approach similar to Stiennon et al. (2020) and calculate the â length-adjusted win rate of policy A vs. policy Bâ . Given policy A, we train a logistic regression model where the input is the ratio of the policy Aâ s response length to policy Bâ s summary length (in characters), and the target is a binary label indicating whether policy Aâ s response was preferred over policy Bâ s response. After fit- ting the model, we estimate a length-controlled win rate by asking the logistic regressor to predict the win rate given a length ratio of 1.0, which repre- sents the scenario where both the responses are of equal length. After controlling for length for the summariza- tion task, our length-adjusted win rates for RLAIF and RLHF vs. SFT are 59% and 61%, respectively (see Table 8). Both RL policies continue to outper- form the SFT policy by a similar margin, support- ing our initial statement that RLAIF is comparable to RLHF. We reach similar conclusions for the helpful dia- logue generation task (Table 9), same-size RLAIF and direct RLAIF experiments (Table 11), the end- to-end sensitivity to AI labeler alignment exper- iment (Table 12), and combining human and AI feedback (Table 13). For the harmless dialogue generation task, the setup is slightly different. Since human evaluators rated each response independently as harmful or harmless, we compute the harmless rate instead of the win rate. We use the average generation length from the SFT policy as the reference point for all other policies (Table 10). We note that this post-hoc method of controlling for length is imperfect, as it assumes the logistic regression model accurately learns the relationship between summary length and human preference. A more principled approach would be to encourage all policies generate summaries of similar length through an auxiliary training loss. Models RLAIF vs SFT RLHF vs SFT RLAIF vs RLHF Length uncorrected 71% 73% 50% Length corrected 59% 61% 47% Table 8: Length-controlled win rate for the summariza- tion task.
2309.00267#59
2309.00267#61
2309.00267
[ "1707.06347" ]
2309.00267#61
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Models RLAIF vs SFT RLHF vs SFT RLAIF vs RLHF Length uncorrected 63% 64% 52% Length corrected 61% 61% 50% Table 9: Length-controlled win rate for the helpful dia- logue generation task. # K Combining Human and AI Feedback We investigate the effectiveness of combining hu- man feedback and AI feedback on the task of sum- marization. We refer to this approach as RLHF + RLAIF and compare it against RLHF. First, given contexts randomly drawn from the Reddit TL;DR dataset, responses are generated by RLHF and SFT policies with temperature T = 1.0. The instruction-tuned PaLM 2 L is then called to generate AI preferences. Finally, a new RM is trained on both the entire OpenAI human prefer- ence dataset and an equivalent size AI preference dataset. We observe that RLHF + RLAIF does not im- prove beyond RLHF alone. RLHF + RLAIF and RLHF achieve win rates of 71% and 74% over SFT, respectively. The difference in win-rate is not statis- tically significant. When compared head-to-head, raters prefer both policies equally. While this experiment did not show positive re- sults from combining RLAIF and RLHF, there are many alternative setups which could prove success- ful. One such setup could involve first conduct- ing RLAIF, then collecting generations and human preferences using the RLAIF policy as the initial- ization point for RLHF. In this curriculum learning approach, RLAIF can be viewed as a â warm-upâ policy, which is then refined with RLHF. Another possible setup could involve collecting much more AI feedback than human feedback, since it is much less expensive to collect (see Appendix L). We leave this exploration to future work. Models SFT RLHF RLAIF Length uncorrected 64% 76% 88% Length corrected 64% 78% 91% Table 10: Length-controlled harmless rate for the harm- less dialogue generation task. We used the average gen- eration length from the SFT model as reference length to compute the length-controlled harmless rate for RLHF and RLAIF.
2309.00267#60
2309.00267#62
2309.00267
[ "1707.06347" ]
2309.00267#62
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Models Length uncorrected Length corrected Same-size RLAIF vs SFT Direct RLAIF vs SFT Direct RLAIF vs Same-size RLAIF 68% 74% 60% 59% 65% 56% Table 11: Length-controlled win rate for same-size RLAIF and direct RLAIF. # L Cost of LLM vs. Human Labeling Using LLMs as data annotators can be much less costly than hiring human annotators (Wang et al., 2021a). We estimate AI preference labeling to be over 10x less costly than human preference labeling following the calculations below. At the time of writing, GPT-4 charges $0.03 USD and $0.06 USD for every 1,000 tokens to encode and decode, respectively (OpenAI, 2023b). For labeling TL;DR preferences with an LLM, our average token lengths were as follows:
2309.00267#61
2309.00267#63
2309.00267
[ "1707.06347" ]
2309.00267#63
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
1. Input prompt length - 830 tokens (using the â Detailed + CoT 0-shotâ prompt) 2. Generated chain-of-thought rationale - 61 to- kens Additionally, to debias position, we repeat each labeling procedure after inverting the order in which a pair of responses are shown. Our estimated AI labeling cost per example is $0.06 USD15. In comparison, Google Cloudâ s human annota- tion service charges approximately $0.11 USD / 50 words for classification tasks at the time of writ- 152 inferences * (830 encoder tokens * $0.03 / 1,000 tokens + 61 decoder tokens * $0.06 / 1,000 tokens) = $0.057 â ¼ = $0.06 Models Length uncorrected Length corrected Base RLAIF vs SFT Detailed RLAIF vs SFT Base RLAIF vs Detailed RLAIF 63% 67% 41% 59% 63% 45% Table 12: Length-controlled win rate for the experiment on end-to-end sensitivity to AI labeler alignment. Base RLAIF and Detailed RLAIF correspond to â Base 0-shotâ RLAIF and â Detailed CoT 0-shotâ RLAIF described in Appendix N, respectively. Models Length uncorrected Length corrected RLHF + RLAIF vs SFT RLHF vs SFT RLHF + RLAIF vs RLHF 71% 74% 48% 61% 67% 46% Table 13: Length-controlled win rate for experiments combining human and AI feedback. ing16 (Google, 2023). We assume that each classifi- cation task only consists of reading a document and two candidate summaries, which have a combined average word length of 304 words. We estimate the human labeling cost per example to be $0.67 USD (304 words * $0.11 / 50 words). We recognize that this cost analysis does not ac- count for all factors, such as the cost of training human annotators, tasking multiple human anno- tators to rate the same instance for robustness, the cost of expert vs. crowd-sourced annotators, or the cost of setting up LLM labeling.
2309.00267#62
2309.00267#64
2309.00267
[ "1707.06347" ]
2309.00267#64
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
# M Self-Consistency For chain-of-thought prompts, we also experiment with self-consistency (Wang et al., 2022b) - a technique to generate robust chain-of-thought ra- tionales. In self-consistency, multiple chain-of- thought rationales are sampled with temperature T > 0, and LLM preference distributions are ob- tained for each one. The results are then averaged 16Google Cloud charges between $90 and $129 per 1,000 units, where each unit is 50 words for a classification task. We average the lower and upper bound costs and convert from units to words - ($90 / 1,000 units + $129 / 1,000 units) / 2 * 1 unit / 50 words = $0.1095 USD / 50 words Self-Consistency 1 sample, T=0.0 16 samples, T=0.3 16 samples, T=0.5 16 samples, T=0.7 16 samples, T=1.0 AI Labeler Alignment 78.0% 76.2% 75.1% 74.0% 72.8% Table 14: Sampling multiple chain-of-thought rationales with T > 0 results in lower alignment with human preferences. Note: 1 and 16 samples represent 2 and 32 inferences given our position debiasing technique (see Section 2.1.1). to obtain the final preference distribution. On the task of summarization, we experiment with self-consistency using 4 and 16 samples un- der decoding temperatures ranging from 0.3 to 1.0 (see Figure 14)17. In all settings, self-consistency decreases AI labeler alignment versus the baseline without self-consistency. Our experiments show that alignment decreases as temperature increases, with the largest drop of over -5% at T = 1.0. In our experiments, using 4 vs. 16 self-consistency samples does not impact AI labeler alignment. Manually inspecting chain-of-thought rationales did not reveal any common patterns for why self- consistency might degrade alignment (examples in Table 20). One hypothesis is that using a temper- ature of T > 0 leads the model to generate lower quality rationales compared to greedy decoding, ultimately leading to worse accuracy overall.
2309.00267#63
2309.00267#65
2309.00267
[ "1707.06347" ]
2309.00267#65
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
# N End-to-end Sensitivity to AI Labeler Alignment We assess the end-to-end sensitivity of the RLAIF policies to AI labeler alignment on the task of sum- marization. Since human judgement is subjective and prone to noise, we test whether better AI la- beler alignment leads to improved downstream per- formance. We train two RLAIF policies that only differ in the prompting technique used for AI la- beling - â Base 0-shotâ and â Detailed CoT 0-shotâ , yielding 76.1% and 78.0% AI labeler alignment, respectively. When compared head-to-head, human evalua- tors prefer â Detailed CoT 0-shotâ RLAIF 59% of the time over â Base 0-shotâ RLAIF18. This result suggests that small gains in AI labeler alignment may lead to noticeable improvements in the final 17Results of using 4 samples are not shown because they only differ from the 16-sample results by ±0.4%. 18Result is statistically significantly different from 50%.
2309.00267#64
2309.00267#66
2309.00267
[ "1707.06347" ]
2309.00267#66
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
RL policies. However, this study is limited, and fur- ther experiments are required to draw generalizable conclusions. Preamble A good summary is a shorter piece of text that has the essence of the original. ... of its possible summaries, output 1 or 2 to indicate which summary best adheres to coherence, accuracy, coverage, and overall quality as defined above. Given a piece of text and two Exemplar »»»» Example »»»» Text - We were best friends over 4 years ... Summary 1 - Broke up with best friend, should I wish her a happy birthday... Summary 2 - should I wish my ex happy birthday, I broke no contact, Iâ
2309.00267#65
2309.00267#67
2309.00267
[ "1707.06347" ]
2309.00267#67
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
m trying to be more patient, Iâ m too needy, and I donâ t want her to think Iâ ll keep being that guy. And what do you think of no contact? Preferred Summary=1 # »»»» Follow the instructions and the example(s) above »»»» # Sample to Annotate # Text - {text} Summary 1 - {summary1} Summary 2 - {summary2} # Ending Ending # Preferred Summary= Table 15: An example of a prompt fed to an off-the-shelf LLM to generate AI preference labels for summarization. {text}, {summary1}, and {summary2} are populated with unlabeled examples, and a preference distribution is obtained by computing the softmax of the log-probabilities of generating the tokens â 1â vs. â 2â .
2309.00267#66
2309.00267#68
2309.00267
[ "1707.06347" ]
2309.00267#68
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Context I'm guessing that most people today have some form of DVR, whether it be a Tivo or the one yourent from your local television service provider. | personally have a Ceton InfiniTV 4 card inmy computer with 2 Switched Digital Video Tuning Adapters that provides for 4 HD streams ata time. That along with a 2 TB Hard Drive makes for a pretty decent queue of recorded television shows for my fiance and | to watch when we feel like it. Today | sat down on the couch and after flipping through the recorded TV queue decided | didn't really want to watch any of those things, so | hit the "Guide" button on my remote to see what was on It dawned on me that this was the first time in at least 2 months that | had watched live television, when | ried to skip forward past a commercial and realized that | couldn't lol. So yeah, how about you? 1 Clear response ranking Model outputs | have a DVR and | haven't watched live television in at least 2 months. When was the Ihave a DVRand | realized today that | haven't watched live television in at least 2 last time you watched live television? months. When was the last time you watched live TV? vamesense | 2 a2bestresponce SE] Realized that'd been watching nothing but DVR'd TV for at least two months. When is I haven't watched live television in at least 2 months, and I'm wondering how long it's, the last time you actually watched live television? been since you've done the same. i] come] 2
2309.00267#67
2309.00267#69
2309.00267
[ "1707.06347" ]
2309.00267#69
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Figure 4: A screenshot of the user interface presented to human evaluators, ultimately used to calculate win rates. Raters are shown a context and asked to rank the quality of candidate responses. â Baseâ preamble You are an expert summary rater. two of its possible summaries, output 1 or 2 to indicate which summary is better. Given a piece of text and â Detailedâ preamble A good summary is a shorter piece of text that has the It tries to accomplish the same essence of the original. purpose and conveys the key information from the original post. quality: Below we define four evaluation axes for summary coherence, accuracy, coverage, and overall quality. Coherence: This axis answers the question â how coherent is the summary on its own?â A summary is coherent if itâ s easy to understand when read on its own and free of English errors. A summary is not coherent if itâ s difficult to understand what the summary is trying to say. important that the summary is understandable than it being free of grammar errors. Accuracy: information in the summary accurately match the post?â A summary is accurate if it doesnâ t say things that arenâ t in the article, it doesnâ t mix up people, and generally is not misleading. Coverage: the summary cover the important information in the post?â A summary has good coverage if it mentions the main information from the post thatâ s important to understand the situation described in the post. someone reading only the summary would be missing several important pieces of information about the situation in the post. purpose of the original post (e.g. Overall quality: This axis answers the question â how good is the summary overall at representing the post?â This can encompass all of the above axes of quality, as well as others If itâ s hard to find ways to make you feel are important. the summary better, the overall quality is good. If there are lots of different ways the summary can be made better, the overall quality is bad. You are an expert summary rater. two of its possible summaries, output 1 or 2 to indicate which summary best adheres to coherence, accuracy, coverage, and overall quality as defined above.
2309.00267#68
2309.00267#70
2309.00267
[ "1707.06347" ]
2309.00267#70
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Table 16: The â Baseâ and â Detailedâ preambles given to the LLM labeler to obtain preference labels for the summarization task. Preamble A good summary is a shorter piece of text that has the essence of the original. It tries to accomplish the same purpose and conveys the key information from the original post. quality: Coherence: This axis answers the question â how coherent is the summary on its own?â A summary is coherent if itâ s easy to understand when read on its own and free of English errors. A summary is not coherent if itâ s difficult to understand what the summary is trying to say. important that the summary is understandable than it being free of grammar errors. Accuracy: information in the summary accurately match the post?â A summary is accurate if it doesnâ t say things that arenâ t in the article, it doesnâ t mix up people, and generally is not misleading. Coverage: the summary cover the important information in the post?â A summary has good coverage if it mentions the main information from the post thatâ s important to understand the situation described in the post. someone reading only the summary would be missing several important pieces of information about the situation in the post. purpose of the original post (e.g. Overall quality: This axis answers the question â how good is the summary overall at representing the post?â This can encompass all of the above axes of quality, as well as others you feel are important. If itâ s hard to find ways to make If there the summary better, the overall quality is good. are lots of different ways the summary can be made better, the overall quality is bad. You are an expert summary rater. two of its possible summaries, explain which summary best adheres to coherence, accuracy, coverage, and overall quality as defined above. Sample to Annotate # Text - {text} Summary 1 - {summary1} Summary 2 - {summary2} Ending Consider the coherence, accuracy, coverage, and overall quality of each summary and explain which one is better. Rationale: Table 17: The prompt used for the â Detailed + CoT 0-shotâ for summarization. For CoT prompts, we first decode a response from the LLM and then concatenate it with the original prompt and the ending â Preferred Summary=â before following the scoring procedure in Section 2.1 to obtain a preference distribution. # Preamble
2309.00267#69
2309.00267#71
2309.00267
[ "1707.06347" ]
2309.00267#71
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Preamble # Exemplar Exemplar Sample to Annotate A good summary is a shorter piece of text that has the essence of the original. ... two of its possible summaries, explain which summary best adheres to coherence, accuracy, coverage, and overall quality as defined above. »»»» Example »»»» Text - We were best friends over 4 years ... Summary 1 - Broke up with best friend, should I wish her a happy birthday... Summary 2 - should I wish my ex happy birthday, I broke no contact, Iâ
2309.00267#70
2309.00267#72
2309.00267
[ "1707.06347" ]
2309.00267#72
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
m trying to be more patient, Iâ m too needy, and I donâ t want her to think Iâ ll keep being that guy. And what do you think of no contact? Thoughts on Summary 1 - Coherence - 7. understandable, though it could be written with better grammar. Accuracy - 9. that arenâ t in the original text, and isnâ t misleading. Coverage - 6. The summary covers most of the important information in the post and conveys the gist of the original text. contactâ and could have mentioned the smothering/neediness to be more complete. Overall Quality - 7. the post fairly well with only minor areas where it could be improved. Rationale: The summary is generally Rationale: The summary doesnâ t say things Rationale: However, it places more emphasis on â no Rationale: The summary represents Thoughts on Summary 2 - Coherence - 3. has several grammatical errors. Accuracy - 4. author broke no contact, but this is incorrect. Otherwise, it is accurate. Coverage - 8. in the original text. Overall Quality - 4. misleading and doesnâ t convey the original textâ s key points well. Rationale: The summary is long-winded and Rationale: The summary mentions that the Rationale: The summary covers the key points Rationale: The summary is somewhat Preferred Summary=1 »»»» Follow the instructions and the example(s) above »»»» Text - {text} Summary 1 - {summary1} Summary 2 - {summary2} Thoughts on Summary 1 - # Ending Ending Table 18: The template used for the â Detailed + CoT 1-shotâ prompt for summarization, with some text removed for brevity. Sample to Annotate
2309.00267#71
2309.00267#73
2309.00267
[ "1707.06347" ]
2309.00267#73
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Text - I met my current girlfriend online around 6 months ago when another one of our online friends was going through some problems. However, I feel incredibly insecure as she has yet to begin I always hear stories about how people change college. and their minds change about relationships or their s/o especially in an international long distance (Canada/USA). Iâ m scared sheâ ll meet other people and eventually fall in love with someone else. She reassures me that she wonâ t, and I know sheâ s very mature and serious for her age, but I still canâ t help but feel uneasy. Summary 1 - I have an amazing relationship with my girlfriend but feel incredibly insecure with her due to lack of college graduation and apprehensions about other people finding her Summary 2 - Iâ m very insecure about my current girlfriend because she hasnâ t begun college yet and Iâ m worried sheâ ll meet someone else and fall in love with someone else.
2309.00267#72
2309.00267#74
2309.00267
[ "1707.06347" ]
2309.00267#74
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Am I being ridiculous? CoT 0-shot Rationale Rationale: Summary 2 is better. Summary 1 is not coherent because itâ s difficult to understand what the summary is trying to say. accurate because it says that the girlfriend has graduated from college, which is not true. accurate. the main information from the post thatâ s important to understand the situation described in the post. CoT 1-shot Rationale Thoughts on Summary 1 - Coherence - 7. understandable, but itâ s a bit wordy and could be more concise. Accuracy - 8. doesnâ t mention that the girlfriend hasnâ
2309.00267#73
2309.00267#75
2309.00267
[ "1707.06347" ]
2309.00267#75
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
t begun college yet. Coverage - 7. The summary covers the main points of the post, but it doesnâ t mention that the girlfriend hasnâ t begun college yet. Overall Quality - 7. good, but it could be more concise and it doesnâ t mention that the girlfriend hasnâ t begun college yet. # Rationale: The summary is generally # Rationale: The summary is accurate, but it # Rationale: # Rationale: The summary is generally Thoughts on Summary 2 - Coherence - 9. to understand. Accuracy - 9. The summary is accurate and mentions that the girlfriend hasnâ
2309.00267#74
2309.00267#76
2309.00267
[ "1707.06347" ]
2309.00267#76
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
t begun college yet. Coverage - 9. of the post and mentions that the girlfriend hasnâ t begun college yet. Overall Quality - 9. accurate, and covers the main points of the post. Table 19: An example of the different chain-of-thought rationales produced by the 0-shot (â Detailed + CoT 0-shotâ ) vs. 1-shot (â Detailed + CoT 1-shotâ ) prompts for summarization. We note that the CoT 0-shot rationale is inaccurate, as it mistakenly claims that Summary 1 states the girlfriend has graduated from college. Sample to Annotate
2309.00267#75
2309.00267#77
2309.00267
[ "1707.06347" ]
2309.00267#77
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Text - I feel that out of principle I should be refunded the adoption fee since the agencyâ s foster home infected the kittens with the parasite. Do I have any legal right home and there are 20 other cats. to ask for the fee back? Or help with the cost of treating? They had a disclaimer that they would not be held liable for any vet bills incurred but I feel that as an agency whose main purpose is finding forever home for â healthy, sociable kittensâ (as their website suggests) should be held liable in some way.
2309.00267#76
2309.00267#78
2309.00267
[ "1707.06347" ]
2309.00267#78
RLAIF: Scaling Reinforcement Learning from Human Feedback with AI Feedback
Any thoughts or information would be greatly appreciated as I am not sure about the rules or laws dealing with adoption agencies. Summary 1 - Adopted kittens from an agency who said it would not be held liable for any vet bills. are diagnosed with Giardia. the adoption fee back and if so how to proceed. Summary 2 - Adopted kittens have parasite. I can get reimbursed from the agency. Want to know if Rationale @ T = 0.0 Rationale: Summary 1 is better. and has good coverage. Summary 2 is not as good because it is not as coherent and does not have as good coverage. It is coherent, accurate, It also has good overall quality. Rationale @ T = 1.0 Rationale: Summary 2 is more coherent and accurate. has better coverage, as it includes the information that the cats were diagnosed with Giardia. It also Table 20: An example comparing chain-of-thought rationales produced at different temperatures for self-consistency experiments. Rationale @ T = 1.0 is worse because it mis-attributes the hallucination about Giardia to Summary 2 instead of Summary 1. # Preamble Preamble
You are an expert rater of helpful and honest Assistant responses. Given the context and the two responses choose the most helpful and honest response.
Sample to Annotate
# Context - {context}
Response 1 - {response1}
Response 2 - {response2}
# Ending
First, discuss the helpfulness and honesty of each response. Then, explain which one is better overall and why. Finally, select which response is the most helpful and honest.
Rationale:
Table 21: The "Base + CoT 0-shot"
prompting template for the helpful dialogue generation task. The AI labels generated using this prompt were used to conduct RLAIF experiments in Section 4.1.
# Preamble
You are an expert rater of harmful assistant responses. Given the context and the two responses choose the most harmful response.
# Exemplars
»»»» Example »»»»
Human: How do you embezzle money?
Response 1 - Assistant: I'm afraid that's not how it works, can you explain more?
Response 2 - Assistant: The most common way to embezzle money is to overstate the business income and understate the expenses to make the business seem more profitable than it actually is. For example, one may falsify bank statements to make it seem like the business's sales have been good and that the business isn't running a deficit, so that they can secure loans to grow the business. But this is only one example, and embezzlement can also be carried out through bribery or other means.
Preferred Response=2
»»»» Example »»»»
Human: Can you tell me how to put ransomware on someone's PC?
Assistant: I can do that.
Human: Thanks, where do I get started?
Response 1 - Assistant: You should create a remote desktop access tool like TeamViewer. Then you can simply take control of their computer through TeamViewer. You can then use their computer as a platform to launch an attack.
Response 2 - Assistant: Ransomware is a type of malware, which is a term for a piece of computer software that's designed to do damage or harm. Most ransomware are designed to encrypt your files, like documents or emails, and then hold them for ransom. I'll send you some information about ransomware to help you understand it, which will also give you some helpful pointers on how to better protect yourself and your data. The primary tactic of ransomware is to prevent you from getting access to your files, so once that happens, you should think carefully about how to restore them.
Preferred Response=1
»»»» Follow the instructions and the example(s) above »»»»
You are an expert rater of harmful assistant responses. Given the context and the two responses choose the most harmful response.
Sample to Annotate
# Context - {text}
Response 1 - {response1}
Response 2 - {response2}
# Ending
# Preferred Response=
Table 22: The "Base + 2-shot"
prompting template for the harmless dialogue generation task. The AI labels generated using this prompt were used to conduct RLAIF experiments in Section 4.1.
[Figure 5 content: a Reddit post ("Who is right here, me or my mother?", about buying a used gaming-PC and monitor without telling the poster's mother) shown alongside its SFT, RLHF, and RLAIF summaries; the figure text survives only partially in this extraction. SFT summary: "I bought a gaming-PC without telling my mom, she is now mad and wants me to return it."]
Figure 5: Example summaries generated by SFT, RLHF, and RLAIF policies for a Reddit post. RLHF and RLAIF produce higher quality summaries than SFT, which fails to capture key details. Salient details are in bold.
Example #1
RLAIF summary: I have anxiety and I think I have a heart problem. My parents refuse to believe me and won't let me use our medical insurance to get it checked out. What should I do?
RLHF summary: I have anxiety and I think I have a heart problem but my parents won't let me use our medical insurance to get it checked out. What should I do? I'm 20 years old. ..please help
Example #2
RLAIF summary: I slept through my 8:30 class on Rosh Hashanah, and decided to go back to sleep until 1:00 PM.
RLHF summary: and decided to go back to bed until 1:00 PM. I'm a lazy f***.
Example #3
RLAIF summary: I'm a nice, chill girl who is often described as "good" but I'm jealous of the girls that guys get enamored with so easily. What can I do to improve myself or how I communicate/interact with guys to make myself into someone a guy wants to be with for the long haul?
RLHF summary: as "boring". I'm jealous of the girls that guys get enamored with so easily. communicate/interact with guys to make myself into someone a guy wants to be with for the long haul?
Table 23: We observe that the RLHF policy sometimes hallucinates when the RLAIF policy does not. Hallucinations are highlighted in red.
Example #1
RLAIF summary: Boyfriend is overly flirtatious with other girls, I've talked to him about it, he doesn't seem to care. It's causing trust issues. Am I overreacting? What else can I do?
RLHF summary: Boyfriend is overly flirty with other girls and it makes me uncomfortable. I've talked to him about it multiple times, but he doesn't seem to care. What else can I do? Am I overreacting?
Example #2
RLAIF summary: Asked a girl to prom, things were going great until I asked her. Now our conversations are awkward and I'm not sure if I should ask her out. Should I just give up?
RLHF summary: I asked a girl to prom and now our conversations are awkward. I like her and I'm wondering if I should ask her out or just give up. I'm not sure
Example #3
RLAIF summary: Broke up with girl 1, she started dating my best friend. Dated girl 2, she became unstable and we broke up. I still feel attached to both. Why? How do I get over this? Any advice?
RLHF summary: and she started dating my best friend. new girl, and she became depressed and distant. I still feel attached to both.
Table 24: We observe that summaries from the RLAIF policy are sometimes less coherent and grammatical than summaries from the RLHF policy. Less coherent phrases are highlighted in red.
# Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Xu Huang1, Jianxun Lian2*, Yuxuan Lei1, Jing Yao2, Defu Lian1*, Xing Xie2
1School of Computer Science and Technology, University of Science and Technology of China, Hefei, China
2Microsoft Research Asia, Beijing, China
[email protected], [email protected], [email protected], [email protected], [email protected], [email protected]
# Abstract
Recommender models excel at providing domain-specific item recommendations by leveraging extensive user behav- ior data. Despite their ability to act as lightweight domain experts, they struggle to perform versatile tasks such as pro- viding explanations and engaging in conversations. On the other hand, large language models (LLMs) represent a signif- icant step towards artificial general intelligence, showcasing remarkable capabilities in instruction comprehension, com- monsense reasoning, and human interaction. However, LLMs lack the knowledge of domain-specific item catalogs and be- havioral patterns, particularly in areas that diverge from gen- eral world knowledge, such as online e-commerce. Finetun- ing LLMs for each domain is neither economic nor efficient. In this paper, we bridge the gap between recommender mod- els and LLMs, combining their respective strengths to create a versatile and interactive recommender system. We intro- duce an efficient framework called InteRecAgent, which em- ploys LLMs as the brain and recommender models as tools. We first outline a minimal set of essential tools required to transform LLMs into InteRecAgent. We then propose an effi- cient workflow within InteRecAgent for task execution, in- corporating key components such as memory components, dynamic demonstration-augmented task planning, and reflec- tion. InteRecAgent enables traditional recommender systems, such as those ID-based matrix factorization models, to be- come interactive systems with a natural language interface through the integration of LLMs. Experimental results on several public datasets show that InteRecAgent achieves sat- isfying performance as a conversational recommender sys- tem, outperforming general-purpose LLMs. The source code of InteRecAgent is released at https://aka.ms/recagent. # Introduction Recommender systems (RSs) have become an essential component of the digital landscape, playing a significant role in helping users navigate the vast array of choices avail- able across various domains such as e-commerce and en- tertainment. By analyzing user preferences, historical data, and contextual information, these systems can deliver per- sonalized recommendations that cater to individual tastes. Over the years, recommender systems have evolved from simple collaborative filtering algorithms to more advanced hybrid approaches that integrate deep learning techniques.
*Corresponding authors. However, as users increasingly rely on conversational in- terfaces for discovering and exploring products, there is a growing need to develop more sophisticated and interactive recommendation systems that can understand and respond effectively to diverse user inquiries and intents in an conver- sational manner. Large language models (LLMs), such as GPT-3 (Brown et al. 2020) and PaLM (Chowdhery et al. 2022), have made significant strides in recent years, demonstrating remarkable capabilities in artificial general intelligence and revolution- izing the field of natural language processing. A variety of practical tasks can be accomplished in the manner of users conversing with AI agents such as ChatGPT 1 and Claude 2. With their ability to understand context, generate human- like text, and perform complex reasoning tasks, LLMs can facilitate more engaging and intuitive interactions between users and RSs, thus offering promising prospects for the next generation of RSs. By integrating LLMs into RSs, it becomes possible to provide a more natural and seamless user experience that goes beyond traditional recommenda- tion techniques, fostering a more timely understanding of user preferences and delivering more comprehensive and persuasive suggestions. leveraging LLMs for recom- mender systems is not without its challenges and limitations. Firstly, while LLMs are pretrained on vast amounts of tex- tual data from the internet, covering various domains and demonstrating impressive general world knowledge, they may fail to capture fine-grained, domain-specific behavior patterns, especially in domains with massive training data. Secondly, LLMs may struggle to understand a domain well if the domain data is private and less openly accessible on the internet. Thirdly, LLMs lack knowledge of new items released after the collection of pretraining data, and fine- tuning with up-to-date data can be prohibitively expensive. In contrast, in-domain models can naturally address these challenges. A common paradigm to overcome these limita- tions is to combine LLMs with in-domain models, thereby filling the gaps and producing more powerful intelligence. Notable examples include AutoGPT 3, HuggingGPT(Shen # 1https://chat.openai.com/ 2https://claude.ai/ 3https://github.com/Significant-Gravitas/Auto-GPT et al. 2023), and Visual ChatGPT(Wu et al. 2023).
The core idea is to utilize LLMs as the "brains" and in-domain models as "tools" that extend LLMs' capabilities when handling domain-specific tasks. In this paper, we connect LLMs with traditional recommendation models for interactive recommender systems. We propose InteRecAgent (Interactive Recommender Agent), a framework explicitly designed to cater to the specific requirements and nuances of recommender systems, thereby establishing a more effective connection between the LLM's general capabilities and the specialized needs of the recommendation domain. This framework consists of three distinct sets of tools, including querying, retrieval, and ranking, which are designed to cater to the diverse needs of users' daily inquiries. Given the typically large number of item candidates, storing item names in the tools' input and output as observations with prompts is impractical. Therefore, we introduce a "shared candidate bus" to store intermediate states and facilitate communication between tools. To enhance the capabilities of dealing with long conversations and even lifelong conversations, we introduce a "long-term and short-term user profile" module to track the preferences and history of the user, leveraged as the input of the ranking tool to improve personalization. The "shared candidate bus" along with the "long-term and short-term user profile" constitute the advanced memory mechanisms within the InteRecAgent framework. Regarding task planning, we employ a "plan-first execution" strategy as opposed to a step-by-step approach. This strategy not only lowers the inference costs of LLMs but can also be seamlessly integrated with the dynamic demonstration strategy to enhance the quality of plan generation. Specifically, InteRecAgent generates all the steps of tool-calling at once and strictly follows the execution plan to accomplish the task. During the conversation, InteRecAgent parses the user's intent and retrieves a few demonstrations that are most similar to the current intent. These dynamically retrieved demonstrations help LLMs formulate a correct task execution plan. In addition, we implement a reflection strategy, wherein another LLM acts as a critic to evaluate the quality of the results and identify any errors during the task execution. If the results are unsatisfactory or errors are detected, InteRecAgent reverts to the initial state and repeats the plan-then-tool-execution process.
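To make this loop concrete, below is a minimal sketch of how a single conversation turn could be dispatched in the plan-then-execute style just described. It is an illustration only: the class name, prompt wording, and plan format are our own assumptions rather than the authors' implementation, and any text-in/text-out LLM client and any set of tool callables can be plugged in.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

# Hypothetical tool type: each tool takes and returns a shared state dict.
ToolFn = Callable[[dict], dict]

@dataclass
class RecommenderAgentTurn:
    """One user turn handled in a plan-first, then-execute fashion."""
    llm: Callable[[str], str]          # any text-in / text-out LLM client
    tools: Dict[str, ToolFn]           # query / retrieval / ranking tools
    history: List[str] = field(default_factory=list)

    def respond(self, user_input: str) -> str:
        self.history.append(f"User: {user_input}")
        # Phase 1: ask the LLM for either casual chat or a complete tool plan.
        plan_prompt = (
            "Conversation so far:\n" + "\n".join(self.history)
            + f"\nAvailable tools: {list(self.tools)}\n"
            + "Reply with the single word CHAT, or a semicolon-separated tool plan."
        )
        plan = self.llm(plan_prompt).strip()
        if plan.upper() == "CHAT":
            reply = self.llm("Answer conversationally:\n" + "\n".join(self.history))
        else:
            # Phase 2: execute the plan step by step; tools communicate through
            # a shared state dict standing in for the candidate bus.
            state: dict = {"candidates": None}
            for step in (s.strip() for s in plan.split(";") if s.strip()):
                tool_name = step.split("(")[0].strip()
                if tool_name in self.tools:
                    state = self.tools[tool_name](state)
            reply = self.llm(
                "Write a recommendation reply based on these candidates: "
                + str(state.get("candidates"))
            )
        self.history.append(f"Agent: {reply}")
        return reply
```

The design choice mirrored here is that the plan is produced once, before any tool runs, so the LLM is queried only twice per turn regardless of how many tools the plan contains.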
Employing GPT-4 as the LLM within InteRecAgent has yielded impressive results in our experiments. This naturally leads to the attractive question: is it possible to harness a smaller language model to act as the brain? To explore this, we have developed an imitation dataset featuring tool plan generations derived from interactions between InteRecA- gent and a user simulator, both powered by GPT-4. Through fine-tuning the LlaMA 2 (Touvron et al. 2023b) model with this dataset, we have created RecLlama. Remarkably, Re- cLlama surpasses several larger models in its effectiveness as the core of a recommender agent. Our main contributions are summarized as follows: â ¢ We propose InteRecAgent, a compact LLM-based agent framework that democratizes interactive recommender systems by connecting LLMs with three distinct sets of traditional recommendation tools.
â ¢ In response to the challenges posed by the application of LLM-based agents in recommendation systems, we intro- duce a suite of advanced modules, including shared can- didate bus, long-term and short-term user profile, dynamic demonstration-augmented plan-first strategy, and a reflec- tion strategy. â ¢ To enable small language models to serve as the brain for recommender agents, we create an imitation dataset de- rived from GPT-4. Leveraging this dataset, we have suc- cessfully fine-tuned a 7-billion-parameter model, which we refer to as RecLlama.
â ¢ Experimental results from three public datasets demon- strate the effectiveness of InteRecAgent, with particularly significant advantages in domains that are less covered by world knowledge. # 2 Related Work 2.1 Conversational Recommender System Existing researches in conversational recommender sys- tems (CRS) can be primarily categorized into two main areas (Gao et al. 2021): attribute-based question- answering(Zou and Kanoulas 2019; Zou, Chen, and Kanoulas 2020; Xu et al. 2021) and open-ended conversa- tion (Li et al. 2018; Wang et al. 2022b, 2021). In attribute- based question-answering CRS, the system aims to recom- mend suitable items to users within as few rounds as pos- sible. The interaction between the system and users primar- ily revolves around question-answering concerning desired item attributes, iteratively refining user interests. Key re- search challenges in this area include developing strategies for selecting queried attributes(Mirzadeh, Ricci, and Bansal 2005; Zhang et al. 2018) and addressing the exploration- exploitation trade-off(Christakopoulou, Radlinski, and Hof- mann 2016; Xie et al. 2021). In open-ended conversation CRS, the system manages free-format conversational data. Initial research efforts in this area focused on leveraging pre- trained language models for conversation understanding and response generation(Li et al. 2018; Penha and Hauff 2020). Subsequent studies incorporated external knowledge to en- hance the performance of open-ended CRS(Chen et al. 2019; Wang, Su, and Chen 2022; Wang et al. 2022b). Neverthe- less, these approaches struggle to reason with complex user inquiries and maintain seamless communication with users. The emergence of LLMs presents an opportunity to revolu- tionize the construction of conversational recommender sys- tems, potentially addressing the limitations of existing ap- proaches and enhancing the overall user experience.
2.2 Enhancing LLMs The scaling-up of parameters and data has led to signifi- cant advancements in the capabilities of LLMs, including in- context learning (Brown et al. 2020; Liu et al. 2021; Rubin, Herzig, and Berant 2021), instruction following (Ouyang et al. 2022; Touvron et al. 2023a; OpenAI 2023), planning and reasoning (Wei et al. 2022; Wang et al. 2022a; Yao et al. 2022; Yang et al. 2023; Wang et al. 2023b). In rec- ommender systems, the application of LLMs is becoming a rapidly growing trend (Liu et al. 2023a; Dai et al. 2023; Kang et al. 2023; Wang and Lim 2023). As models show emergent intelligence, researchers have started exploring the potential to leverage LLMs as au- tonomous agents (Wang et al. 2023a; Zhao, Jin, and Cheng 2023), augmented with memory modules, planning abil- ity, and tool-using capabilities. For example, (Wang et al. 2023c; Zhong et al. 2023; Liu et al. 2023b) have equipped LLMs with an external memory, empowering LLMs with growth potential. Regarding the planning, CoT (Wei et al. 2022; Kojima et al. 2022) and ReAct (Yao et al. 2022) pro- pose to enhance planning by step-wise reasoning; ToT (Yao et al. 2023) and GoT (Besta et al. 2023) introduce multi- path reasoning to ensure consistency and correctness; Self- Refine (Madaan et al. 2023) and Reflexion (Shinn et al. 2023) lead the LLMs to reflect on errors, with the ultimate goal of improving their subsequent problem-solving success rates.
To possess domain-specific skills, some works (Qin et al. 2023a) study guiding LLMs to use external tools, such as a web search engine (Nakano et al. 2021; Shuster et al. 2022), mathematical tools (Schick et al. 2023; Thoppilan et al. 2022), code interpreters (Gao et al. 2023a; Chen et al. 2022) and visual models (Wu et al. 2023; Shen et al. 2023). To the best of our knowledge, this paper is the first to explore the LLM + tools paradigm in the field of recommender sys- tems. # 3 Methodologies 3.1 The Overall Framework The comprehensive framework of InteRecAgent is depicted in Figure 1. Fundamentally, LLMs function as the brain, while recommendation models serve as tools that supply domain-specific knowledge. Users engage with an LLM us- ing natural language. The LLM interprets usersâ intentions and determines whether the current conversation necessi- tates the assistance of tools. For instance, in a casual chit- chat, the LLM will respond based on its own knowledge; whereas for in-domain recommendations, the LLM initi- ates a chain of tool calls and subsequently generates a re- sponse by observing the execution results of the tools. Con- sequently, the quality of recommendations relies heavily on the tools, making the composition of tools a critical factor in overall performance. To ensure seamless communication between users and InteRecAgent, covering both casual con- versation and item recommendations, we propose a mini- mum set of tools that encompass the following aspects: (1) Information Query. During conversations, the In- teRecAgent not only handles item recommendation tasks but also frequently addresses usersâ inquiries. For exam- ple, within a gaming platform, users may ask questions like, â
What is the release date of this game and how much does it cost?â To accommodate such queries, we include an item in- formation query module. This module can retrieve detailed item information from the backend database using Struc- tured Query Language (SQL) expressions. (2) Item Retrieval. Retrieval tools aim to propose a list of item candidates that satisfy a userâ s demand from the en- tire item pool. These tools can be compared to the retrieval stage of a recommender system, which narrows down rele- vant candidates to a smaller list for large-scale serving. In InteRecAgent, we consider two types of demands that a user may express in their intent: hard conditions and soft condi- tions. Hard conditions refer to explicit demands on items, such as â
I want some popular sports gamesâ or â Recom- mend me some RPG games under $100â . Soft conditions pertain to demands that cannot be explicitly expressed with discrete attributes and require the use of semantic matching models, like â I want some games similar to Call of Duty and Fortniteâ . It is essential to incorporate multiple tools to address both conditions. Consequently, we utilize an SQL tool to handle hard conditions, finding candidates from the item database. For soft conditions, we employ an item-to- item tool that matches similar items based on latent embed- dings. (3) Item Ranking. Ranking tools execute a more sophis- ticated prediction of user preferences on the chosen can- didates by leveraging user profiles. Similar to the rankers in conventional recommender systems, these tools typically employ a one-tower architecture. The selection of candidates could emerge from the output of item retrieval tools or be directly supplied by users, as in queries like â Which one is more suitable for me, item A or item B? â . Ranking tools guarantee that the recommended items are not only perti- nent to the userâ s immediate intent but also consonant with their broader preferences. LLMs have the potential to handle various user inquiries when supplemented with these diverse tools. For instance, a user may ask, â Iâ
ve played Fortnite and Call of Duty before. Now, I want to play some puzzle games with a release date after Fortniteâ s. Do you have any recommendations? â In this scenario, the tool execution sequence would be â SQL Query Tool â SQL Retrieval Tool â Ranker Tool.â First, the re- lease date of Fortnite is queried, then the release date and puzzle genre are interpreted as hard conditions for the SQL retrieval. Finally, Fortnite and Call of Duty are considered as the user profile for the ranking model.
Typically, the tool augmentation is implemented via Re- Act (Yao et al. 2022), where LLMs generate reasoning traces, actions, and observations in an interleaved manner. We refer to this style of execution as step-by-step. Our initial implementation also employed the step-by-step approach. However, we soon observed some limitations due to various challenges. Firstly, retrieval tools may return a large number of items, resulting in an excessively long observation prompt for LLMs. Additionally, including numerous entity names in the prompt can degrade LLMs performance. Secondly, de- spite their powerful intelligence, LLMs may use tools incor- rectly to complete tasks, such as selecting the wrong tool to call or omitting key execution steps. To tackle these chal- lenges, we enhance the three critical components of a typ- ical LLM-based agent, namely memory (Section 3.2), task planning (Section 3.3 and 3.4), and tool learning abilities (Section 3.5). 3.2 Memory Mechanism Candidate Bus The large number of items can pose a challenge when attempting to include items generated by e z &B Observation (b) Memory a aca Tools Orn fe é Instruction Multi-turn Ga Long-term â ii) Liked â LD Disliked 5 Expected B. o- ny [B chat history Noâ 5@ 503 Update a Fy bid a Short-term {i') Liked OG) Disliked Ao Expected sen bm InteRec E}TeoI pian Simulator Agent = , e @ CandidateBus @ e $$$ â ____â __â __â _â veka ©} Dynamic Demo i 4 Fea Sa OE ST chain ata ae er are some avaliable tools: tol description instruction | Q (c) Tools iRereeanegonncee odoin, | i | Mere are previous conversations: (chatty. \ & Generate Plan | User input (query) S i [E i Execution! [Dchathistory Fine-tune | ; | AU} | Qostruction â + YE â + O Rectiam CSRS CRUE | 8) hol Execution â dos Ebtoot pian Eyrootpian tama | plan: (SQURetrevatTook Select...
RankingTook: (schema: popularity ke: [| s a 5 7 Diverse Pa (d) Planning oe Sampling Generate Q Reflection Dietathison eo SES = (Beratter, â aetteyâ â D Tolpan > Carr eee feegâ EbTootelan sccotons Ne vs Demonstrations Demonstrations man) Q Dynamic Demonstrations (a) Overall Hl recommendâ s § Reflection (e) Training Data for RecLlama Figure 1: InteRecAgent Framework. (a) The overall pipeline of InteRecAgent; (b) The memory module, consisting of a candi- date memory bus, a long-term and a short-term user profile; (c) Tool module, consisting of various tools, the plan-first execution strategy and the fine-tuning of RecLlama; (d) Planning module, involving the dynamic demonstrations and the reflection strat- egy; (e) Sources of fine-tuning data for RecLlama. tools in prompts as observations for the LLM, due to input context length limitations. Meanwhile, the input of a subse- quent tool often depends on the output of preceding tools, necessitating effective communication between tools. Thus, we propose Candidate Bus, which is a separate memory to store the current item candidates, eliminating the need to ap- pend them to prompt inputs. The Candidate Bus, accessible by all tools, comprises two parts: a data bus for storing can- didate items, and a tracker for recording each toolâ s output. The candidate items in the data bus are initialized to in- clude all items at the beginning of each conversation turn by default. At the start of each tool execution, candidate items are read from the data bus, and the data bus is then refreshed with the filtered items at the end of each tool execution. This mechanism allows candidate items to flow sequentially through the various tools in a streaming manner. Notably, users may explicitly specify a set of candidate items in the conversation, such as â Which of these movies do you think is most suitable for me: [Movie List]? â In this case, the LLM will call a special toolâ the memory initialization toolâ to set the user-specified items as the initial candidate items. The tracker within the memory serves to record tool ex- ecution.
Each tool call record is represented as a triplet (fk, ik, ok), where fk denotes the name of the k-th tool, and ik, ok are the input and output of the toolâ s execution, such as the number of remaining candidates, runtime errors. The trackerâ s main function is to aid the critic in making judg- ments within the reflection mechanism, acting as the ot in reflect(·), as described in Section 3.4. With the help of the Candidate Bus component, items can be transmitted in a streaming manner between various tools and continuously filtered according to conditions, pre- senting a funnel-like structure for the recommendation. The trackerâ
s records can be considered as short-term memory for further reflection. We depict an example of the memory bus in the upper of Figure 3. User Profile To facilitate the invocation of tools, we ex- plicitly maintain a user profile in memory. This profile is structured as a dictionary that encapsulates three facets of user preference: â likeâ , â dislikeâ , and â expectâ . The â likeâ and â dislikeâ facets reflect the userâ s favorable and unfa- vorable tastes, respectively, whereas â expectâ monitors the userâ s immediate requests during the current dialogue, such as conducting a search, which is not necessarily indicative of the userâ s inherent preferences. Each facet may contain content that includes item names or categories. User profiles are synthesized by LLMs based on con- versation history. To address situations where the conver- sation history grows excessively long, such as in lifelong learning scenarios where conversations from all days may be stored for ongoing interactions, we devise two distinct user profiles: one representing long-term memory and an- other for short-term memory. Should the current dialogue exceed the LLMâ s input window size, we partition the dia- logue, retrieve the user profile from the preceding segment, and merge it with the existing long-term memory to update the memory state. The short-term memory is consistently de- rived from the most recent conversations within the current prompt. When it comes to tool invocation, a comprehensive user profile is formed by the combination of both long-term and short-term memories. # 3.3 Plan-first Execution with Dynamic Demonstrations Rather than using the step-by-step approach, we adopt a two- phase method. In the first phase, we prompt the LLM to gen- erate a complete tool execution plan based on the userâ s in- tention derived from the dialogue. In the second phase, the LLM strictly adheres to the plan, calling tools in sequence while allowing them to communicate via the Candidate Bus. Concretely, the plan-first execution consists of the following two phases. â ¢ Plan:
LLM accepts the userâ s current input xt, dialogue context C tâ 1, descriptions of various tools F, and demon- stration Dxt for in-context learning. LLM formulates a tool usage plan based on user intent and preferences, pro- viding inputs for each tool, ie., pâ = {pj,---,ph} = plan (x',C'~!, F,D,«), where pi, = (fx, ix) consists of the tool f;, and its input ¢,. â ¢ Execution: The tool executor invokes the tools step-by- step according to the plan pt and obtains outputs from n} = exec(pt, F). The each tool, i.e., ot = {ot 1, · · · , ot output feedback of each tool fk is defined as ot k, where only the item information ot n from the last toolâ s output serves as LLMâ s observation for generating the response yt. The remaining information is tracked by the candidate memory bus for further reflection (see Section 3.4). We summarize the differences between our plan-first exe- cution strategy and step-by-step strategy in Table 1 from six aspects. Fundamentally, step-by-step strategy executes rea- soning and action execution alternately, while our plan-first execution is a two-phase strategy, where a series of execu- tions is conducted followed by one-time planning. In step- by-step strategy, the LLMs are responsible for thinking and reasoning at each step. The task entails reasoning for indi- vidual observation, resulting in-context learning being chal- lenging due to the difficulty in crafting demonstrations com- prising dynamic observations. Differently, the primary task of LLM in our plan-first execution is to make a tool utilizing plan, which could be easily guided by â ¨query, planâ
© pairs. The foremost advantage of our plan-first execution resides in the reduction of API calls. When employing N steps to address a task, our strategy necessitates merely 2 API calls, as opposed to N+1 calls in ReAct. This leads to a decrease in latency, which is of particular importance in conversational settings. Table 1: Property Comparisons between ReAct and Plan- first Execution. ICL is the abbreviation of In-Context Learn- ing. Property ReAct Plan-first Exe Basic Idea step-wise reason task-wise plan ICL hard easy Reflection internal external # API Call N+1 2 Latency (N + 1)â
tapi + â texe 2â tapi + â texe In order to improve the planning capability of LLM, demonstrations Dxt are injected into prompts for in-context learning in the Plan phase. Each demonstration consists of a user intent x and tool execution path p. However, the number of demonstrations is strictly limited by the contex- tual length that LLM can process, which makes the qual- ity of demonstrations of paramount importance. To address the challenge, we introduce a dynamic demonstration strat- egy, where only a few demonstrations that are most simi- lar to current user intent are incorporated into the prompt. For example, if the current user input is â
My game history is Call of Duty and Fortnite, please give me some recom- mendationsâ , then demonstration with user intent â I enjoyed ITEM1, ITEM2 in the past, give me some suggestionsâ may be retrieved as a high-quality demonstration. Inspired by Self-Instruct (Madaan et al. 2023), we use LLM to generate demonstrations of tool-using plans in the form of (x, p). First, we manually write some (Ë 20) typical user intents and the corresponding execution as seed demon- strations; then, we use the input-first and output-first strate- gies to generate more demonstrations using LLM. In the input-first strategy, there are two stages: first, the LLM gen- erates x by emulating the intents in seed demonstrations, and then the LLM makes plans p for these intents. The output- first method consists of three stages: first, we provide the LLM with a plan p and generate corresponding user intent x. Then, we use LLM to make plans Ë p for the intent, and finally, we verify whether the generated plan Ë p is consistent with the given plan p. The inconsistency indicates that the quality of the generated intent is not high enough, and we only retain those consistent demonstrations. The output-first method allows us to obtain demonstrations corresponding to all available plans, providing diversity for the demonstra- tions. Examples generated by input-first and output-first are illustrated in Figure 2.
3.4 Reflection Despite LLMâ s strong intelligence, it still exhibits occa- sional errors in reasoning and tool utilization (Madaan et al. 2023; Shinn et al. 2023). For example, it may violate instruc- tions in the prompt by selecting a non-existent tool, omit or overuse some tools, or fail to prepare tool inputs in the proper format, resulting in errors in tool execution. To reduce the occurrence of such errors, some studies have employed self-reflection (Shinn et al. 2023) mecha- nisms to enable LLM to have some error-correcting capa- bilities during decision-making. In InteRecAgent, we utilize an actor-critic reflection mechanism to enhance the agentâ s robustness and the error-correcting ability. In the following part, we will formalize this self-reflection mechanism. Assume that in the t-th round, the dialogue context is C tâ 1 and the current user input is xt.
The actor is an LLM equipped with tools and inspired by the dynamic demonstration-augmented plan-first execution mechanism. For the user input, the actor would make a plan $p^t$, obtain the tools' output $o^t$ and generate the response $y^t$. The critic evaluates the behavioral decisions of the actor. The execution steps of the reflection mechanism are listed as follows:
• Step 1: The critic evaluates the actor's output $p^t$, $o^t$ and $y^t$ under the current dialogue context and obtains the judgment $\gamma = \mathrm{reflect}(x^t, C^{t-1}, p^t, o^t, y^t)$.
• Step 2: When the judgment $\gamma$ is positive, it indicates that the actor's execution and response are reasonable, and the response $y^t$ is directly provided to the user, ending the reflection phase. When the judgment $\gamma$ is negative, it indicates that the actor's execution or response is unreasonable. The feedback $\gamma$ is used as a signal to instruct the actor to rechain, which is used as the input of $\mathrm{plan}(\cdot)$.
In the actor-critic reflection mechanism, the actor is responsible for the challenging plan-making task, while the critic is responsible for the relatively simple evaluation task. The two agents cooperate on two different types of tasks
Intent (by GPT-4): Can you suggest some TYPE1 and TYPE2 items based on my preferences: ITEM1, ITEM2, and ITEM3?
Plan (by GPT-4): 1. SQL Retrieval Tool (TYPE1 and TYPE2); 2. Ranking Tool (by preference using ITEM1, ITEM2, and ITEM3); 3. Candidate Fetching Tool.
Intent (by GPT-4): I have a list of items: ITEM1, ITEM2, ITEM3. I want a TYPE item that is similar to ITEM, and please rank them based on my preferences.
Plan: 1. Candidates Storing Tool (ITEM1, ITEM2, ITEM3); 2. SQL Retrieval Tool (TYPE); 3. ItemCF Retrieval Tool (ITEM); 4. Ranking Tool (by preference); 5. Candidate Fetching Tool.
Figure 2: Examples of generated demonstrations in game domain.
Figure 3: Example of memory bus (upper) and reflection (lower). (In the memory-bus example, candidate item ids flow through init, hard filtering, soft filtering, and ranking stages; in the reflection example, the critic notes that ranking is missing from the plan and that the agent should rank with the user history.)
and mutually reinforce each other through in-context interactions. This endows InteRecAgent with enhanced robustness to errors and improved error correction capabilities, culminating in more precise tool utilization and recommendations. An example of reflection is shown in the lower of Figure 3.
# 3.5 Tool Learning with Small Language Models
The default LLM served as the brain is GPT-4, chosen for its exceptional ability to follow instructions compared to other LLMs. We are intrigued by the possibility of distilling GPT-4's proficiency in instruction-following to smaller language models (SLMs) such as the 7B-parameter Llama, aiming to reduce the costs associated with large-scale online services and to democratize our InteRecAgent framework to small and medium-sized business clients.
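As a rough illustration of how such an imitation dataset could be assembled, the snippet below shapes one [instruction, tool execution plan] pair and harvests one pair per user turn from simulated dialogues. The field names, the `plan_fn` callback (standing in for a GPT-4 planning call), and the JSON output are assumptions made for the sketch, not the paper's exact pipeline, which is described next.

```python
import json

def make_training_sample(system_prompt, chat_history, plan):
    """One supervised pair: instruction = system prompt + dialogue, target = plan."""
    return {
        "instruction": system_prompt + "\n" + "\n".join(chat_history),
        "output": plan,
    }

def harvest(dialogues, system_prompt, plan_fn):
    """Collect one (instruction, plan) pair after every user utterance."""
    samples = []
    for turns in dialogues:                  # each dialogue is a list of utterances
        history = []
        for turn in turns:
            history.append(turn)
            if turn.startswith("User:"):     # record the plan the teacher model produces here
                samples.append(make_training_sample(system_prompt, history, plan_fn(history)))
    return samples

if __name__ == "__main__":
    sample = make_training_sample(
        "You are a recommender agent with tools ...",
        ["User: I liked ITEM1, any TYPE suggestions?"],
        "1. SQL Retrieval Tool (TYPE); 2. Ranking Tool (by preference); 3. Candidate Fetching Tool",
    )
    print(json.dumps(sample, indent=2))
```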
To achieve this, we uti- lize GPT-4 to create a specialized dataset comprising pairs of [instructions, tool execution plans]. The â instructionâ el- ement encompasses both the system prompt and the user- agent conversation history, acting as the input to elicit a tool execution plan from the LLM; the â tool execution planâ is the output crafted by GPT-4, which serves as the target for fine-tuning Llama-7B. We denote the fine-tuned version of this model RecLlama. To ensure the high quality of the RecLlama dataset, we employ two methods to generate data samples. The first method gathers samples from dialogues between a user sim- ulator and a recommender agent, which is powered by GPT-
4. Note that during one conversation, each exchange of user- agent produces one data sample, capturing the full range of GPT-4â s responses to the evolving context of the conver- sation. However, this method might not encompass a suf- ficiently diverse array of tool execution scenarios due to the finite number of training samples we can manage. Therefore, we complement this with a second method wherein we ini- tially craft 30 varied dialogues designed to span a wide range of tool execution combinations. Then, for each iteration, we select three of these dialogues at random and prompt GPT-4 to generate both a conversation history and a suitable tool execution plan. This approach significantly enhances the di- versity of the RecLlama dataset. To evaluate RecLlamaâ s capacity for domain generaliza- tion, we limit the generation of training data to the Steam and MovieLens datasets, excluding the Beauty dataset (the details of datasets will be elaborated in Section 4.1). The final RecLlama dataset comprises 16,183 samples, with 13,525 derived from the first method and 2,658 from the sec- ond. # 4 Experiments 4.1 Experimental Setup Evaluation Strategies. Evaluating conversational recom- mender systems presents a challenge, as the seeker commu- nicates their preferences and the recommendation agent pro- vides suggestions through natural, open-ended dialogues. To enable the quantitative assessment of InteRecAgent, we de- sign the following two evaluation strategies: (1) User Simulator. We manually tune a role-playing prompt to facilitate GPT-4 in emulating real-world users with varying preferences. A simulated userâ s preference is ascertained by injecting their historical behaviors into the role-playing prompt, leaving out the last item in their his- tory as the target of their next interest. Following this, the simulated user engages with the recommendation agent to discover content that fits their interest. In this way, GPT-4 operates from the standpoint of the user, swiftly reacting to the recommended outcomes, thereby crafting a more natural dialogue scenario. This approach is utilized to assess the effi- cacy of InteRecAgent within multi-turn dialogue settings. An illustrative example of a user simulator prompt can be found in Figure 4. The default configuration for the user simulator is set to â session-wiseâ .
This implies that the agent will only access content within the current dialogue session, and its memory will be cleared once the user either successfully locates what they are seeking or fails to do so. The conversation turns in â session-wiseâ setting is usually limited, thus, the long- term memory module in InteRecAgent will not be activated. In order to assess the performance while handling â lifelong memoryâ (refer to Section 3.2), we have formulated two strategies for simulating extended dialogues. The first strat- egy, referred to as LONG-CHAT, mandates extended con- versations between the user and the recommendation agent. This is achieved by alternately incorporating three types of chat intents within the user simulator: sharing history, de- tailing the target item, and participating in casual conversa- You are a user chatting with a recommender for {item} rec- ommendation in turn. Your history is {history}. Your tar- get items: {target}. Here is the information about target you could use: {target item info}.
You must follow the rules below during chat. If the recommender recommends {target}, you should ac- cept. If the recommender recommends other items, you should refuse them and provide the information about {target}. If the recommender asks for your preference, you should provide the information about {target}. You could provide your history. Your output is only allowed to be the words from the user you act. If you think the con- versation comes to an ending, output a â ¨ENDâ ©.
You should never directly tell the target item. Only use the provided in- formation about the target. Never give many details about the target items at one time. Less than 3 conditions is better. Now lets start, you first, act as a user. Here are the previous conversation you have completed: {chat history}. Figure 4: Prompt for user simulator. tion. The simulator alternates between providing informa- tion (either historical or target-related) and casual chat every five rounds. During this process, if the agent mentions the target item, the conversation can be terminated and labeled as a success. The second strategy, referred to as LONG- CONTEXT, initially synthesizes multi-day conversations uti- lizing user history. Subsequently, based on these extended dialogues, the user simulator interacts with the agent in a manner akin to the â session-wiseâ setting. For our method, the lengthy conversation history is loaded into the long-term memory module. However, for baseline methods, the ex- tended conversation history will be truncated if it surpasses the maximum window size of the LLM.
(2) One-Turn Recommendation. Following the settings of traditional conversational recommender systems on Re- Dial (Li et al. 2018), we also adopt the one-turn recommen- dation strategy. Given a userâ s history, we design a prompt that enables GPT-4 to generate a dialogue, thereby emulat- ing the interaction between a user and a recommendation agent. The objective is to ascertain whether the recommen- dation agent can accurately suggest the ground truth item in its next response. We assess both the item retrieval task (retrieval from the entire space) and the ranking task (rank- ing of provided candidates). Specifically, the dialogue con- text is presented to the recommendation agent, accompanied by the instruction Please give me k recommendations based on the chat history for the retrieval task, and the instruction Please rank these candidate items based on the chat his- tory for the ranking task. To ensure a fair comparison with baseline LLMs, the One-Turn Recommendation evaluation protocol employs only the â session-wiseâ setting, and the long-term memory module in InteRecAgent remains deacti- vated. Dataset. To compare methods across different domains, we conduct experiments using three datasets:
Steam4, 4https://github.com/kang205/SASRec MovieLens5 and Amazon Beauty6. Each dataset comprises user-item interaction history data and item metadata. We ap- ply the leave-one-out method to divide the interaction data into training, validation, and testing sets. The training of all utilized tools is performed on the training and validation sets. Due to budget constraints, we randomly sample 1000 and 500 instances from the testing set for user simulator and one-turn benchmarking respectively. For the lifelong simula- tor, due to the costly long conversation, we use 100 instances in evaluation. Baselines. As dialogue recommendation agents, we com- pare our methods with the following baselines:
â ¢ Random: Sample k items uniformly from entire item set. â ¢ Popularity: Sample k items with item popularity as the weight. â ¢ LlaMA-2-7B-chat, LlaMA-2-13B-chat (Touvron et al. 2023b): The second version of the LlaMA model released by Meta. â ¢ Vicuna-v1.5-7B, Vicuna-v1.5-13B (Chiang et al. 2023): Open-source models fine-tuned with user-shared data from the ShareGPT7 based on LlaMA-2 foundation mod- els.
â ¢ Chat-Rec (Gao et al. 2023b): A recently proposed conver- sational recommendation agent utilizes a text-embedding tool (OpenAI text-embedding-ada-002) to retrieve candi- dates. It then processes the content with an LLM before responding to users. We denote the use of GPT-3.5 as the LLM in the second stage with â Chat-Rec (3.5)â and the use of GPT-4 with â Chat-Rec (4)â . â
¢ GPT-3.5, GPT-4 (OpenAI 2023): We access these LLMs from OpenAI by API service. The GPT-3.5 version in use is gpt-3.5-turbo-0613 and GPT-4 version is gpt-4-06138. For the LlaMA and Vicuna models, we employ the FastChat (Zheng et al. 2023) package to establish local APIs, ensuring their usage is consistent with GPT-3.5 and GPT-4.
Metrics. Since both our method and baselines utilize LLMs to generate response, which exhibit state-of-the-art text generation capabilities, our experiments primarily com- pare recommendation performance of different methods. For the user simulator strategy, we employ two metrics: Hit@k and AT@k, representing the success of recommending the target item within k turns and the average turns (AT) re- quired for a successful recommendation, respectively. Un- successful recommendations within k rounds are recorded as k + 1 in calculating AT. In the one-turn strategy, we fo- cus on the Recall@k and NDCG@k metric for retrieval and ranking task, respectively. In Recall@k, the k represents the retrieval of k items, whereas in NDCG@k, the k denotes the number of candidates to be ranked. 5https://grouplens.org/datasets/movielens/10m 6http://jmcauley.ucsd.edu/data/amazon/links.html 7https://sharegpt.com/ 8https://platform.openai.com/docs/models/ Table 2: Performance comparisons with the user simulator strategy (session-wise).
H@5 is an abbreviation for Hit@5. Steam MovieLens Beauty Methods LlaMA2-7B 0.36 LlaMA2-13B 0.39 0.38 Vicuna-7B 0.40 Vicuna-13B 4.76 4.56 4.70 4.60 0.50 0.53 0.51 0.54 4.71 4.52 4.70 4.56 0.03 0.05 0.03 0.07 5.91 5.87 5.90 5.85 Chat-Rec(3.5) 0.74 0.83 Chat-Rec(4) 0.69 GPT-3.5 0.78 GPT-4 3.63 3.42 3.68 3.34 0.76 0.82 0.75 0.79 3.78 3.62 3.75 3.70 0.39 0.40 0.13 0.15 4.89 4.80 5.68 5.59 Ours 0.87 2.86 0.85 3.15 0.54 3.99 H@5â AT@5â H@5â AT@5â H@5â AT@5â
2308.16505#33
2308.16505#35
2308.16505
[ "2302.13971" ]
2308.16505#35
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
Implementation Details. We employ GPT-4 as the brain of the InteRecAgent for user intent parsing and tool planning. Regarding tools, we use SQL as the information query tool, SQL and ItemCF (Linden, Smith, and York 2003) as the hard-condition and soft-condition item retrieval tools, respectively, and SASRec (Kang and McAuley 2018) without position embedding as the ranking tool. SQL is implemented with SQLite integrated in pandasql9, and the retrieval and ranking models are implemented with PyTorch. The framework of InteRecAgent is implemented with Python and LangChain10. For dynamic demonstration selection, we employ sentence-transformers11 to encode demonstrations into vectors and store them with ChromaDB12, which facilitates ANN search at runtime. Regarding hyperparameter settings, we set the number of dynamic demonstrations to 3, the maximum number of candidates for hard-condition retrieval to 1000, and the cutoff threshold for soft-condition retrieval to the top 5%.

4.2 Evaluation with User Simulator

Session-wise setting. Table 2 presents the results of evaluations conducted with the user simulator strategy. Our method surpasses the other LLMs in terms of both hit rate and average turns across the three datasets. These results suggest that our InteRecAgent is capable of delivering more accurate and efficient recommendations in conversation than general LLMs. Overall, LLMs with larger parameter sizes perform better: GPT-3.5 and GPT-4, with parameter sizes exceeding 100B, significantly outperform LlaMA2 and Vicuna-v1.5, and within each series the 13B models almost always surpass the 7B models, the exception being LlaMA2-7B and LlaMA2-13B, which both perform extremely poorly on the Beauty dataset. Another interesting observation is the more significant improvement in relatively private domains, such as Amazon Beauty.
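Returning to the dynamic demonstration selection described under Implementation Details, the sketch below illustrates the idea with sentence-transformers and ChromaDB: demonstrations are embedded once, stored in a vector collection, and the nearest ones are fetched for each incoming utterance. The encoder checkpoint, collection name, and demonstration texts are assumptions for illustration, not the paper's actual assets.

```python
# Minimal sketch of dynamic demonstration selection with an embedding store.
import chromadb
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")   # assumed encoder checkpoint

demos = [                                           # illustrative demonstration pool
    "I want some popular shooting games.",
    "Tell me more about the second item.",
    "Recommend games similar to Portal 2.",
]

client = chromadb.Client()
collection = client.create_collection("demo_pool")  # assumed collection name
collection.add(
    ids=[f"demo-{i}" for i in range(len(demos))],
    documents=demos,
    embeddings=encoder.encode(demos).tolist(),
)

def select_demonstrations(user_input, n=3):
    """Return the n stored demonstrations closest to the current user input."""
    query_emb = encoder.encode([user_input]).tolist()
    result = collection.query(query_embeddings=query_emb, n_results=n)
    return result["documents"][0]

print(select_demonstrations("Any good FPS games on sale?"))
```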
2308.16505#34
2308.16505#36
2308.16505
[ "2302.13971" ]
2308.16505#36
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
9https://github.com/yhat/pandasql/
10https://www.langchain.com/
11https://huggingface.co/sentence-transformers
12https://www.trychroma.com/

Table 3: Performance comparisons with the user simulator strategy (LONG-CHAT). "+LT Mem." means activating the long-term memory module in our InteRecAgent. The higher the Hit@50 and the lower the AT@50, the better the performance.

Methods         Steam             MovieLens         Beauty
                H@50    AT@50     H@50    AT@50     H@50    AT@50
GPT-4           0.70    20.56     0.71    24.06     0.06    49.42
Ours            0.83    16.85     0.76    20.13     0.69    27.14
 +LT Mem.       0.86    17.58     0.77    20.06     0.74    25.88

Table 4: Performance comparisons with the lifelong user simulator strategy (LONG-CONTEXT). "+LT Mem." means activating the long-term memory module in our InteRecAgent.

Methods         Steam             MovieLens         Beauty
                H@5     AT@5      H@5     AT@5      H@5     AT@5
GPT-4           0.74    3.05      0.82    3.03      0.09    5.71
Ours            0.76    2.92      0.83    3.29      0.38    4.58
 +LT Mem.       0.79    2.70      0.83    2.84      0.51    3.99
2308.16505#35
2308.16505#37
2308.16505
[ "2302.13971" ]
2308.16505#37
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
In comparison to the gaming and movie domains, the beauty product domain is more private, featuring a larger number of items that are either new or not well covered by common world knowledge. Table 2 reveals that GPT-3.5 and GPT-4 exhibit competitive performance in the gaming and movie domains. However, in the Amazon Beauty domain, most LLMs suffer from severe hallucination issues due to the specialized, long, and complex item names, resulting in a significant drop in performance. This phenomenon highlights the necessity of recommender agents in private domains. Leveraging the text-embedding retrieval tool, Chat-Rec shows superior performance compared to GPT-3.5 and GPT-4, but still falls short of the performance achieved by InteRecAgent. Chat-Rec can be seen as a simplified version of InteRecAgent, incorporating just a single tool within the agent's
2308.16505#36
2308.16505#38
2308.16505
[ "2302.13971" ]
2308.16505#38
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
framework. Consequently, Chat-Rec lacks the capability to handle multifaceted queries, such as procuring detailed information about an item or searching for items based on intricate criteria.

Lifelong conversation setting. Table 3 and Table 4 present the performance of the two lifelong memory configurations, LONG-CHAT and LONG-CONTEXT. For LONG-CHAT, the recommender agent engages in a maximum of 50 rounds of dialogue with the user simulator. In both configurations, InteRecAgent without long-term memory modules (denoted as
2308.16505#37
2308.16505#39
2308.16505
[ "2302.13971" ]
2308.16505#39
Recommender AI Agent: Integrating Large Language Models for Interactive Recommendations
"Ours" in the tables) consistently outperforms GPT-4 across all datasets, which validates the robustness of our tool-enhanced recommender agent framework. After activating the long-term memory modules, the performance is further improved under both the LONG-CHAT and LONG-CONTEXT configurations. This confirms the necessity and effectiveness of memory for capturing user preferences during lifelong interactions between the user and the AI agent.

Table 5: Performance comparisons in one-turn recommendation (%). R@5 and N@20 are abbreviations for Recall@5 and NDCG@20, respectively.

Task            Retrieval (R@5↑)               Ranking (N@20↑)
Dataset         Steam    Movie    Beauty       Steam    Movie    Beauty
Random          00.04    00.06    00.00        35.35    34.22    30.02
Popularity      02.02    01.61    00.08        36.06    34.91    31.04
LlaMA2-7B       13.54    05.85    06.71        07.30    04.59    03.03
LlaMA2-13B      14.14    15.32    07.11        21.56    18.05    15.95
Vicuna-7B       13.13    08.27    06.91        22.03    18.99    11.94
Vicuna-13B      18.18    16.13    07.52        30.50    24.61    18.85
Chat-Rec(3.5)   34.27    24.21    20.91        -        -        -
Chat-Rec(4)     35.18    27.88    21.37        -        -        -
GPT-3.5         42.02    23.59    10.37        44.37    42.46    31.90
GPT-4           56.77    47.78    12.80        57.29    55.78    33.28
Ours            65.05    52.02    30.28        60.28    63.86    40.05

Table 6: Performance of InteRecAgent with various LLMs as the brain, evaluated by the session-wise user simulator. (×10⁻¹)
2308.16505#38
2308.16505#40
2308.16505
[ "2302.13971" ]