Dataset schema (field: type, value-length or value range):
- doi: string (10 chars)
- chunk-id: int64 (0–936)
- chunk: string (401–2.02k chars)
- id: string (12–14 chars)
- title: string (8–162 chars)
- summary: string (228–1.92k chars)
- source: string (31 chars)
- authors: string (7–6.97k chars)
- categories: string (5–107 chars)
- comment: string (4–398 chars)
- journal_ref: string (8–194 chars)
- primary_category: string (5–17 chars)
- published: string (8 chars)
- updated: string (8 chars)
- references: list
2308.10053
45
# 5.3 Limitations of LLMs as Zero-shot CRS

Finding 10 - LLM recommendations suffer from popularity bias in CRS. Popularity bias refers to the phenomenon that popular items are recommended even more frequently than their popularity would warrant [8]. Figure 8 shows the popularity bias in LLM recommendations, though they are not necessarily biased toward the popular items of the target datasets. On ReDIAL, the most popular movies such as Avengers: Infinity War appear around 2% of the time over all ground-truth items; on Reddit, the most popular movies such as Everything Everywhere All at Once appear less than 0.3% of the time over ground-truth items. But in the recommendations generated by GPT-4 (other LLMs share a similar trend), the most popular items such as The Shawshank Redemption appear around 5% of the time on ReDIAL and around 1.5% of the time on Reddit. Compared to the target datasets, LLM recommendations are more concentrated on popular items, which may cause further issues such as the bias amplification loop [8]. Moreover, the recommended popular items are similar across different datasets, which may reflect item popularity in the pre-training corpus of LLMs.

Footnote 12: We only use items that can be linked to ML-25M in this experiment; 63.32% of items are linked using the links.csv file from ML-25M.
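The concentration comparison described above can be illustrated by measuring the share of occurrences taken by the single most frequent item. This is a minimal sketch with hypothetical toy data, not the paper's code:

```python
from collections import Counter

def top_item_share(items, k=1):
    """Fraction of all occurrences taken by the k most frequent items."""
    counts = Counter(items)
    return sum(c for _, c in counts.most_common(k)) / len(items)

# Hypothetical toy data: ground-truth mentions are spread out,
# while LLM recommendations pile onto one popular title.
ground_truth = ["Avengers"] * 2 + [f"movie_{i}" for i in range(98)]
llm_recs = ["Shawshank"] * 5 + [f"movie_{i}" for i in range(95)]

print(top_item_share(ground_truth))  # 0.02 -> ~2% of the time
print(top_item_share(llm_recs))      # 0.05 -> ~5% of the time
```

A larger gap between the two shares indicates recommendations that are more concentrated on popular items than the target distribution warrants.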
2308.10053#45
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
46
[11] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th International Conference on World Wide Web. 173–182.
[12] Joseph Konstan and Loren Terveen. 2021. Human-centered recommender systems: Origins, advances, challenges, and opportunities. AI Magazine 42, 3 (2021), 31–42.
[13] Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37.
[14] Hoyeop Lee, Jinbae Im, Seongwon Jang, Hyunsouk Cho, and Sehee Chung. 2019. MeLU: Meta-learned user preference estimator for cold-start recommendation. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. 1073–1082.
2308.09904#46
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
46
Finding 11 - Recommendation performance of LLMs is sensitive to geographical regions. Despite their effectiveness in general, it is unclear whether LLMs can be good recommenders across various cultures and regions. Specifically, pre-trained language models' strong open-domain ability can be attributed to pre-training on massive data [5], but this also makes LLMs sensitive to the data distribution. To investigate LLMs' recommendation abilities across regions, we take test instances from the Reddit dataset, obtain the production region of 7,476 movies from a publicly available movie dataset (footnote 13) by exact title matching, then report Recall@1 for the linked movies grouped by region. We only report regions with more than 300 data points to ensure enough data to support the result. As shown in Figure 9, the current best model, GPT-4, performs better on recommendation for movies produced in English-speaking regions. This could be due to bias in the training data: the left of Figure 9 shows that items on Reddit forums are dominated by movies from English-speaking regions. This result highlights that large language models' recommendation performance varies by region and culture, and demonstrates the importance of cross-regional analysis and evaluation for language-model-based conversational recommendation models.

# 6 RELATED WORK
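The per-region breakdown described above can be sketched as follows; the record format and helper name are hypothetical illustrations, not the authors' evaluation code:

```python
from collections import defaultdict

def recall_at_1_by_region(examples, min_count=300):
    """examples: iterable of (region, ground_truth_item, top1_prediction).

    Returns Recall@1 per region, keeping only regions with more than
    `min_count` data points (the paper uses a cutoff of 300).
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for region, truth, pred in examples:
        totals[region] += 1
        hits[region] += int(pred == truth)
    return {r: hits[r] / totals[r] for r in totals if totals[r] > min_count}

# Hypothetical usage with a tiny cutoff for demonstration:
data = [("US", "a", "a"), ("US", "b", "c"), ("UK", "x", "x")]
print(recall_at_1_by_region(data, min_count=1))  # {'US': 0.5}
```

The support cutoff matters: without it, regions with only a handful of linked movies would produce noisy, unreliable Recall@1 estimates.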
2308.10053#46
2308.09904
47
# 7 CONCLUSION AND FUTURE WORK

From the perspective of humans, we introduce the RAH framework for recommendation, incorporating the design of an assistant built from LLM agents. Our experiments highlight the efficacy of the Learn-Act-Critic loop and the reflection mechanism in enabling the assistant to align more closely with user personalities. In addition, we evaluate the RAH framework on different recommender systems in reducing user burden and find that the framework generalizes, which echoes the non-invasive role of the assistant. Additionally, we measure the assistant's capability to provide proxy feedback on unpopular items to mitigate selection bias. Finally, we explore potential solutions for increasing user control over recommended results and personal privacy through the assistant.

[15] Yujie Lin, Pengjie Ren, Zhumin Chen, Zhaochun Ren, Jun Ma, Maarten de Rijke, et al. 2018. Explainable fashion recommendation with joint outfit matching and comment generation. arXiv preprint arXiv:1806.08977 2 (2018).
2308.09904#47
2308.10053
47
Conversational Recommendation. Conversational recommender systems (CRS) aim to understand user preferences and provide personalized recommendations through conversations. Typical traditional CRS setups include template-based CRS [13, 26, 37, 38, 70] and critiquing-based CRS [9, 42, 67]. More recently, as natural language processing has advanced, the community developed "deep" CRS [10, 41, 64] that support interactions in natural language. Aside from collaborative filtering signals, prior work shows that CRS models benefit from various additional information. Examples include knowledge-enhanced models [10, 74] that make use of external knowledge bases [1, 47], review-aware models [49], and session/sequence-based models [43, 76]. Presently, UniCRS [64], a model built on DialoGPT [69] with prompt tuning [4], stands as the state-of-the-art approach on CRS datasets such as ReDIAL [41] and INSPIRED [22]. Currently, by leveraging LLMs, [16] proposes a new CRS pipeline but does not provide quantitative results, and [63] proposes better user simulators
2308.10053#47
2308.09904
48
[16] Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, and Yan Zhang. 2023. Is ChatGPT a good recommender? A preliminary study. arXiv preprint arXiv:2304.10149 (2023).
[17] Qijiong Liu, Jieming Zhu, Quanyu Dai, and Xiaoming Wu. 2022. Boosting deep CTR prediction with a plug-and-play pre-trainer for news recommendation. In Proceedings of the 29th International Conference on Computational Linguistics. 2823–2833.
[18] Ruibo Liu, Ruixin Yang, Chenyan Jia, Ge Zhang, Denny Zhou, Andrew M Dai, Diyi Yang, and Soroush Vosoughi. 2023. Training socially aligned language models in simulated human society. arXiv preprint arXiv:2305.16960 (2023).
[19] Weiming Liu, Xiaolin Zheng, Mengling Hu, and Chaochao Chen. 2022. Collaborative filtering with attribution alignment for review-based non-overlapped cross domain recommendation. In Proceedings of the ACM Web Conference 2022. 1181–1190.
2308.09904#48
2308.10053
48
Currently, by leveraging LLMs, [16] proposes a new CRS pipeline but does not provide quantitative results, and [63] proposes better user simulators to improve evaluation strategies in LLMs. Unlike those papers, we uncover a repeated item shortcut in the previous evaluation protocol, and propose a framework where LLMs serve as zero-shot CRS with detailed analyses to support our findings from both model and data perspectives.
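The repeated item shortcut mentioned above can be checked with a simple filter over test examples. The sketch below is a hypothetical helper, not the paper's protocol code; it flags cases where the ground-truth item already appears verbatim in the dialogue context:

```python
def is_repeated_item(context_turns, target_item):
    """True if the ground-truth item is already mentioned in the context,
    so a model could score a hit by merely copying it back."""
    target = target_item.lower()
    return any(target in turn.lower() for turn in context_turns)

# Hypothetical example: the target title leaks into the conversation.
context = ["Have you seen Inception?", "Yes, I loved Inception!"]
print(is_repeated_item(context, "Inception"))     # True
print(is_repeated_item(context, "Interstellar"))  # False
```

Evaluating separately on examples with and without such leakage distinguishes genuine recommendation ability from copying behavior.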
2308.10053#48
2308.09904
49
[20] Sean M McNee, John Riedl, and Joseph A Konstan. 2006. Being accurate is not enough: How accuracy metrics have hurt recommender systems. In CHI '06 Extended Abstracts on Human Factors in Computing Systems. 1097–1101.

One constraint of our current approach is its reliance on offline evaluations. In the future, we plan to conduct online assessments of the RAH framework, focusing on the sustained influence of the assistant on users and recommender systems. Moreover, we will explore the collaborative relationship between the assistant and humans, such as whether personalities learned from subjective tasks like recommendation can be translated into content-creation scenarios that align with user preferences.

REFERENCES
[1] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An effective and efficient tuning framework to align large language model with recommendation. arXiv preprint arXiv:2305.00447 (2023).
2308.09904#49
2308.10053
49
Large Language Models. Advances in natural language processing (NLP) show that large language models (LLMs) exhibit strong generalization ability towards unseen tasks and domains [5, 12, 65]. In particular, existing work reveals that language models' performance and sample efficiency on downstream tasks can be improved simply by scaling up their parameter counts [35]. Meanwhile, language models can further generalize to a wide range of unseen tasks through instruction tuning, learning to follow task instructions in natural language [52, 57]. Following these advances, many works successfully deploy large language models on a wide range of downstream tasks such as question answering, numerical reasoning, code generation, and commonsense reasoning without any gradient updates [5, 35, 44, 72]. Recently, there have been various attempts by the recommendation community to leverage large language models for recommendation; this includes both adapting architectures used by large language models [14, 19] and repurposing existing LLMs for recommendation [39, 48, 62]. However, to the best of our knowledge, ours is the first work that provides a systematic quantitative analysis of LLMs' ability on conversational recommendation.

Footnote 13: https://www.kaggle.com/datasets/rounakbanik/the-movies-dataset

CIKM '23, October 21–25, 2023, Birmingham, United Kingdom
2308.10053#49
2308.09904
50
[2] Stephen Bonner and Flavian Vasile. 2018. Causal embeddings for recommendation. In Proceedings of the 12th ACM Conference on Recommender Systems. 104–112.
[3] Chong Chen, Min Zhang, Yongfeng Zhang, Yiqun Liu, and Shaoping Ma. 2020. Efficient neural matrix factorization without sampling for recommendation. ACM Transactions on Information Systems (TOIS) 38, 2 (2020), 1–28.
[21] Lin Ning, Steve Chien, Shuang Song, Mei Chen, Yunqi Xue, and Devora Berlowitz. 2022. EANA: Reducing privacy risk on large-scale recommendation models. In Proceedings of the 16th ACM Conference on Recommender Systems. 399–407.
[22] Joon Sung Park, Joseph C O'Brien, Carrie J Cai, Meredith Ringel Morris, Percy Liang, and Michael S Bernstein. 2023. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442 (2023).
[23] Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining. IEEE, 995–1000.
2308.09904#50
2308.10053
50
# 7 CONCLUSION AND DISCUSSION

We investigate Large Language Models (LLMs) as zero-shot Conversational Recommendation Systems (CRS). Through our empirical investigation, we first address a repetition shortcut in previous standard CRS evaluations, which can potentially lead to unreliable conclusions regarding model design. Subsequently, we demonstrate that LLMs as zero-shot CRS surpass all existing fine-tuned CRS models in our experiments. Inspired by their effectiveness, we conduct a comprehensive analysis from both the model and data perspectives to gain insights into the working mechanisms of LLMs, the characteristics of typical CRS tasks, and the limitations of using LLMs as CRS directly. Our experimental evaluations encompass two publicly available datasets, supplemented by our newly created dataset of movie recommendations collected by scraping a popular discussion website. This dataset is the largest public CRS dataset and ensures more diverse and realistic conversations for CRS research. We also discuss future directions based on our findings in this section.
2308.10053#50
2308.09904
51
[23] Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International Conference on Data Mining. IEEE, 995–1000.
[24] Scott Sanner, Krisztian Balog, Filip Radlinski, Ben Wedin, and Lucas Dixon. 2023. Large Language Models are Competitive Near Cold-start Recommenders for Language- and Item-based Preferences. arXiv preprint arXiv:2307.14225 (2023).
[25] Tobias Schnabel, Adith Swaminathan, Ashudeep Singh, Navin Chandak, and Thorsten Joachims. 2016. Recommendations as treatments: Debiasing learning and evaluation. In International Conference on Machine Learning. PMLR, 1670–1679.
[26] Donghee Shin. 2020. How do users interact with algorithm recommender systems? The interaction of users, algorithms, and performance. Computers in Human Behavior 109 (2020), 106344.
[27] Piotr Sulikowski, Tomasz Zdziebko, Dominik Turzyński, and Eliasz Kańtoch. 2018. Human-website interaction monitoring in recommender systems. Procedia Computer Science 126 (2018), 1587–1596.
2308.09904#51
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
51
On LLMs. Given the remarkable performance even without fine-tuning, LLMs hold great promise as an effective approach for CRS tasks by offering superior content/contextual knowledge. The encouraging performance from the open-sourced LLMs [11, 68] also opens up opportunities to further improve CRS performance via efficient tuning [3, 28] and collaborative filtering [36] ensembling. Meanwhile, many conventional tasks, such as debiasing [8] and trustworthiness [17], need to be revisited in the context of LLMs.
On CRS. Our findings suggest the systematic re-benchmarking of more CRS models to understand their recommendation abilities and the characteristics of CRS tasks comprehensively. Gaining a deeper understanding of CRS tasks also requires new datasets from diverse sources, e.g., crowd-sourcing platforms [22, 41], discussion forums, and realistic CRS applications with various domains, languages, and cultures. Meanwhile, our analysis of the information types uncovers the unique importance of the superior content/context knowledge in LLMs for CRS tasks; this distinction also sets CRS tasks apart from traditional recommendation settings and urges us to explore the interconnections between CRS tasks and traditional recommendation [21] or conversational search [2] tasks.
CIKM ’23, October 21–25, 2023, Birmingham, United Kingdom
2308.10053#51
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
52
[28] Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L Griffiths. 2023. Cognitive architectures for language agents. arXiv preprint arXiv:2309.02427 (2023).
[4] Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems 41, 3 (2023), 1–39.
[29] Kirsten Swearingen and Rashmi Sinha. 2001. Beyond algorithms: An HCI perspective on recommender systems. In ACM SIGIR 2001 Workshop on Recommender Systems, Vol. 13. 1–11.
[5] Mukund Deshpande and George Karypis. 2004. Item-based top-n recommendation algorithms. ACM Transactions on Information Systems (TOIS) 22, 1 (2004), 143–177.
[6] Luke Friedman, Sameer Ahuja, David Allen, Terry Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, et al. 2023. Leveraging Large Language Models in Conversational Recommender Systems. arXiv preprint arXiv:2305.07961 (2023).
2308.09904#52
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
52
CIKM ’23, October 21–25, 2023, Birmingham, United Kingdom
REFERENCES
[1] Sören Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. DBpedia: A nucleus for a web of open data. In The Semantic Web: 6th International Semantic Web Conference, 2nd Asian Semantic Web Conference, ISWC 2007 + ASWC 2007, Busan, Korea, November 11–15, 2007, Proceedings. Springer, 722–735.
[2] Payal Bajaj, Daniel Campos, Nick Craswell, Li Deng, Jianfeng Gao, Xiaodong Liu, Rangan Majumder, Andrew McNamara, Bhaskar Mitra, Tri Nguyen, Mir Rosenberg, Xia Song, Alina Stoica, Saurabh Tiwary, and Tong Wang. 2018. MS MARCO: A Human Generated MAchine Reading COmprehension Dataset. arXiv:1611.09268 [cs.CL]
2308.10053#52
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
53
[30] MLC team. 2023. MLC-LLM. https://github.com/mlc-ai/mlc-llm
[31] Qi Wan, Xiangnan He, Xiang Wang, Jiancan Wu, Wei Guo, and Ruiming Tang. 2022. Cross pairwise ranking for unbiased item recommendation. In Proceedings of the ACM Web Conference 2022. 2370–2378.
[32] Guanzhi Wang, Yuqi Xie, Yunfan Jiang, Ajay Mandlekar, Chaowei Xiao, Yuke Zhu, Linxi Fan, and Anima Anandkumar. 2023. Voyager: An open-ended embodied agent with large language models. arXiv preprint arXiv:2305.16291 (2023).
[33] Lei Wang and Ee-Peng Lim. 2023. Zero-Shot Next-Item Recommendation using Large Pretrained Language Models. arXiv abs/2304.03153 (2023). https://api.semanticscholar.org/CorpusID:257985012
2308.09904#53
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
53
[3] Keqin Bao, Jizhi Zhang, Yang Zhang, Wenjie Wang, Fuli Feng, and Xiangnan He. 2023. TALLRec: An Effective and Efficient Tuning Framework to Align Large Language Model with Recommendation. arXiv preprint arXiv:2305.00447 (2023).
[4] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in Neural Information Processing Systems 33 (2020), 1877–1901.
2308.10053#53
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
54
[34] Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Towards unified conversational recommender systems via knowledge-enhanced prompt learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 1929–1937.
[35] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837.
[36] Chuhan Wu, Fangzhao Wu, Tao Qi, and Yongfeng Huang. 2021. Empowering news recommendation with pre-trained language models. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1652–1656.
[37] Chuhan Wu, Fangzhao Wu, Tao Qi, Chao Zhang, Yongfeng Huang, and Tong Xu. 2022. MM-Rec: Visiolinguistic model empowered multimodal news recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2560–2564.
2308.09904#54
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
54
[5] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language Models are Few-Shot Learners. In Advances in Neural Information Processing Systems, H. Larochelle, M. Ranzato, R. Hadsell, M.F. Balcan, and H. Lin (Eds.), Vol. 33. Curran Associates, Inc., 1877–1901. https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf
2308.10053#54
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
55
[38] Zhiheng Xi, Wenxiang Chen, Xin Guo, Wei He, Yiwen Ding, Boyang Hong, Ming Zhang, Junzhe Wang, Senjie Jin, Enyu Zhou, et al. 2023. The rise and potential of large language model based agents: A survey. arXiv preprint arXiv:2309.07864 (2023).
[39] Liu Yang, Junxue Zhang, Di Chai, Leye Wang, Kun Guo, Kai Chen, and Qiang Yang. 2022. Practical and Secure Federated Recommendation with Personalized Mask. In International Workshop on Trustworthy Federated Learning. Springer, 33–45.
[40] Yang Yu, Fangzhao Wu, Chuhan Wu, Jingwei Yi, and Qi Liu. 2021. Tiny-newsrec: Effective and efficient PLM-based news recommendation. arXiv preprint arXiv:2112.00944 (2021).
[41] Tianzi Zang, Yanmin Zhu, Haobing Liu, Ruohan Zhang, and Jiadi Yu. 2022. A survey on cross-domain recommendation: taxonomies, methods, and future directions. ACM Transactions on Information Systems 41, 2 (2022), 1–39.
2308.09904#55
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
55
[6] Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, et al. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4. arXiv preprint arXiv:2303.12712 (2023).
[7] Nicola De Cao, Gautier Izacard, Sebastian Riedel, and Fabio Petroni. 2021. Autoregressive Entity Retrieval. In International Conference on Learning Representations. https://openreview.net/forum?id=5k8F6UU39V
[8] Jiawei Chen, Hande Dong, Xiang Wang, Fuli Feng, Meng Wang, and Xiangnan He. 2023. Bias and debias in recommender system: A survey and future directions. ACM Transactions on Information Systems 41, 3 (2023), 1–39.
2308.10053#55
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
56
[42] Gangyi Zhang. 2023. User-Centric Conversational Recommendation: Adapting the Need of User with Large Language Models. In Proceedings of the 17th ACM Conference on Recommender Systems. 1349–1354.
[43] Qi Zhang, Jingjie Li, Qinglin Jia, Chuyuan Wang, Jieming Zhu, Zhaowei Wang, and Xiuqiang He. 2021. UNBERT: User-News Matching BERT for News Recommendation. In IJCAI. 3356–3362.
[44] Xinyang Zhang, Yury Malkov, Omar Florez, Serim Park, Brian McWilliams, Jiawei Han, and Ahmed El-Kishky. 2022. TwHIN-BERT: a socially-enriched pre-trained language model for multilingual Tweet representations. arXiv preprint arXiv:2209.07562 (2022).
[45] Yongfeng Zhang, Xu Chen, et al. 2020. Explainable recommendation: A survey and new perspectives. Foundations and Trends® in Information Retrieval 14, 1 (2020), 1–101.
2308.09904#56
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
56
[9] Li Chen and Pearl Pu. 2012. Critiquing-based recommenders: survey and emerging trends. User Modeling and User-Adapted Interaction 22 (2012), 125–150.
[10] Qibin Chen, Junyang Lin, Yichang Zhang, Ming Ding, Yukuo Cen, Hongxia Yang, and Jie Tang. 2019. Towards Knowledge-Based Recommender Dialog System. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 1803–1813.
[11] Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. 2023. Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality. https://lmsys.org/blog/2023-03-30-vicuna/
2308.10053#56
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
57
[46] Yu Zheng, Chen Gao, Xiang Li, Xiangnan He, Yong Li, and Depeng Jin. 2021. Disentangling user interest and conformity for recommendation with causal embedding. In Proceedings of the Web Conference 2021. 2980–2991. 8 APPENDICES 8.1 The statistics of datasets The number of users, items and interactions in different domains for both Cross1k and Cross221k.

# Table 3: Cross1k.

| Domain | #Users | #Items | #Interactions |
| --- | --- | --- | --- |
| Movie | 1,045 | 10,679 | 21,024 |
| Book | 1,046 | 20,159 | 24,035 |
| Game | 1,044 | 8,984 | 17,169 |

# Table 4: Cross221k.

| Domain | #Users | #Items | #Interactions |
| --- | --- | --- | --- |
| Movie | 221,861 | 49,791 | 2,313,890 |
| Book | 94,407 | 12,898 | 2,240,010 |
| Game | 7,149 | 12,196 | 71,003 |

# 8.2 Expansion Experiments of Burden Reduction

In our Section 5.2, we have compared the assistant's generation of feedback on behalf of users in the Proxy Set, and then passed this feedback to the recommendation system to help users further optimize the recommendation system. From our previous results, it can
2308.09904#57
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
57
[12] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam M. Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Benton C. Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier García, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor
2308.10053#57
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
58
be seen that, with limited user interaction history and after learning about the user's personality, the assistant can effectively act on behalf of the user, optimizing various recommendation systems while reducing repetitive user operations. However, there might be a potential issue: predicting on the user's Proxy Set could leak the data distribution. Therefore, we conducted additional experiments to investigate whether the assistant truly helps in reducing the user's burden. In Table 5, we included an additional experiment: we used a program that randomly decides whether to like or dislike to simulate a non-intelligent assistant. Experimental results show that even randomly guessing likes and dislikes on the proxy dataset can improve the effect of the recommendation system in most experiments, indicating potential data distribution leakage risks. However, overall, the assistant designed based on our method outperformed the random program. This further validates our finding that the assistant is indeed intelligent enough to help users more easily optimize the recommendation system through proxy feedback. RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents & XXX'24, 2024, Singapore # Table 5: The performance of proxying user feedback and adjusting recommender systems with the additional comparison.
2308.09904#58
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
58
Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor Lewkowycz, Erica Moreira, Rewon Child, Oleksandr Polozov, Katherine Lee, Zongwei Zhou, Xuezhi Wang, Brennan Saeta, Mark Díaz, Orhan Firat, Michele Catasta, Jason Wei, Kathleen S. Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling Language Modeling with Pathways. ArXiv abs/2204.02311 (2022). [13] Konstantina Christakopoulou, Filip Radlinski, and Katja Hofmann. 2016. Towards conversational recommender systems. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. 815–824. [14] Zeyu Cui, Jianxin Ma, Chang Zhou, Jingren Zhou, and Hongxia Yang. 2022. M6-Rec: Generative Pretrained Language Models are Open-Ended Recommender Systems.
2308.10053#58
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
59
| Method | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| LightGCN | 0.5202 | 0.5142 | 0.1283 | 0.1439 | 0.3459 | 0.4309 | 0.3403 | 0.1696 |
| LightGCN-Random | 0.5341 (+0.0139) | 0.5240 (+0.0098) | 0.1527 (+0.0244) | 0.1711 (+0.0272) | 0.4163 (+0.0704) | 0.4934 (+0.0625) | 0.3790 (+0.0387) | 0.1900 (+0.0204) |
| LightGCN-Assistant | 0.5524 (+0.0322) | 0.5339 (+0.0197) | 0.1830 (+0.0547) | 0.1912 (+0.0473) | 0.4330 (+0.0871) | 0.4974 (+0.0665) | 0.4058 (+0.0655) | 0.2033 (+0.0337) |
| PLMRec | 0.0993 | 0.1316 | 0.0092 | 0.0143 | 0.3693 | 0.4630 | 0.1075 | 0.0656 |
| PLMRec-Random | 0.1171 (+0.0178) | 0.1610 (+0.0294) | 0.0149 (+0.0057) | 0.0181 (+0.0038) | 0.3964 (+0.0271) | 0.4743 (+0.0113) | 0.1346 (+0.0271) | 0.0739 (+0.0083) |
| PLMRec-Assistant | 0.1200 (+0.0207) | 0.1692 (+0.0376) | 0.0162 (+0.0070) | 0.0197 (+0.0054) | 0.3981 (+0.0288) | 0.4790 (+0.0160) | 0.1378 (+0.0303) | 0.0766 (+0.0110) |
2308.09904#59
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.09904
60
| Method | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| PLMRec | 0.0993 | 0.1316 | 0.0092 | 0.0143 | 0.3693 | 0.4630 | 0.1075 | 0.0656 |
| PLMRec-Random | 0.1171 (+0.0178) | 0.1610 (+0.0294) | 0.0149 (+0.0057) | 0.0181 (+0.0038) | 0.3964 (+0.0271) | 0.4743 (+0.0113) | 0.1346 (+0.0271) | 0.0739 (+0.0083) |
| PLMRec-Assistant | 0.1200 (+0.0207) | 0.1692 (+0.0376) | 0.0162 (+0.0070) | 0.0197 (+0.0054) | 0.3981 (+0.0288) | 0.4790 (+0.0160) | 0.1378 (+0.0303) | 0.0766 (+0.0110) |
| FM | 0.3492 | 0.3871 | 0.1216 | 0.1299 | 0.2917 | 0.3586 | 0.2421 | 0.1262 |
| FM-Random | 0.3897 (+0.0405) | 0.4200 (+0.0329) | 0.1443 (+0.0227) | 0.1561 (+0.0262) | 0.2903 (-0.0014) | 0.3529 (-0.0057) | 0.2533 (+0.0112) | 0.1336 (+0.0074) |
| FM-Assistant | 0.3919 (+0.0427) | 0.4257 (+0.0386) | 0.1474 (+0.0258) | 0.1603 (+0.0304) | 0.2937 (+0.0020) | 0.3624 (+0.0038) | 0.2549 (+0.0128) | 0.1340 (+0.0078) |
2308.09904#60
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
60
[15] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers). 4171–4186. [16] Luke Friedman, Sameer Ahuja, David Allen, Terry Tan, Hakim Sidahmed, Changbo Long, Jun Xie, Gabriel Schubiner, Ajay Patel, Harsh Lara, et al. 2023. Leveraging Large Language Models in Conversational Recommender Systems. arXiv preprint arXiv:2305.07961 (2023). [17] Yingqiang Ge, Shuchang Liu, Zuohui Fu, Juntao Tan, Zelong Li, Shuyuan Xu, Yunqi Li, Yikun Xian, and Yongfeng Zhang. 2022. A survey on trustworthy recommender systems. arXiv preprint arXiv:2207.12515 (2022).
2308.10053#60
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
61
| Method | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| FM | 0.3492 | 0.3871 | 0.1216 | 0.1299 | 0.2917 | 0.3586 | 0.2421 | 0.1262 |
| FM-Random | 0.3897 (+0.0405) | 0.4200 (+0.0329) | 0.1443 (+0.0227) | 0.1561 (+0.0262) | 0.2903 (-0.0014) | 0.3529 (-0.0057) | 0.2533 (+0.0112) | 0.1336 (+0.0074) |
| FM-Assistant | 0.3919 (+0.0427) | 0.4257 (+0.0386) | 0.1474 (+0.0258) | 0.1603 (+0.0304) | 0.2937 (+0.0020) | 0.3624 (+0.0038) | 0.2549 (+0.0128) | 0.1340 (+0.0078) |
| MF | 0.3737 | 0.4450 | 0.1143 | 0.1275 | 0.2074 | 0.2622 | 0.1933 | 0.1054 |
| MF-Random | 0.4122 (+0.0385) | 0.4714 (+0.0264) | 0.1434 (+0.0291) | 0.1484 (+0.0209) | 0.2618 (+0.0544) | 0.3422 (+0.0800) | 0.2302 (+0.0369) | 0.1279 (+0.0225) |
| MF-Assistant | 0.4300 (+0.0563) | 0.4781 (+0.0331) | 0.1520 (+0.0377) | 0.1593 (+0.0318) | 0.2998 (+0.0924) | 0.3706 (+0.1084) | 0.2651 (+0.0718) | 0.1487 (+0.0433) |
2308.09904#61
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
61
[18] Robert Geirhos, Jörn-Henrik Jacobsen, Claudio Michaelis, Richard S. Zemel, Wieland Brendel, Matthias Bethge, and Felix Wichmann. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence 2 (2020), 665 – 673. [19] Shijie Geng, Shuchang Liu, Zuohui Fu, Yingqiang Ge, and Yongfeng Zhang. 2022. Recommendation as Language Processing (RLP): A Unified Pretrain, Personalized Prompt & Predict Paradigm (P5). In RecSys ’22: Sixteenth ACM Conference on Recommender Systems, Seattle, WA, USA, September 18 - 23, 2022, Jennifer Golbeck, F. Maxwell Harper, Vanessa Murdock, Michael D. Ekstrand, Bracha Shapira, Justin Basilico, Keld T. Lundgaard, and Even Oldridge (Eds.). ACM, 299–315. [20] Arnav Gudibande, Eric Wallace, Charlie Snell, Xinyang Geng, Hao Liu, Pieter Abbeel, Sergey Levine, and Dawn Song. 2023. The False Promise of Imitating Proprietary LLMs. arXiv:2305.15717 [cs.CL]
2308.10053#61
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
62
| Method | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| MF | 0.3737 | 0.4450 | 0.1143 | 0.1275 | 0.2074 | 0.2622 | 0.1933 | 0.1054 |
| MF-Random | 0.4122 (+0.0385) | 0.4714 (+0.0264) | 0.1434 (+0.0291) | 0.1484 (+0.0209) | 0.2618 (+0.0544) | 0.3422 (+0.0800) | 0.2302 (+0.0369) | 0.1279 (+0.0225) |
| MF-Assistant | 0.4300 (+0.0563) | 0.4781 (+0.0331) | 0.1520 (+0.0377) | 0.1593 (+0.0318) | 0.2998 (+0.0924) | 0.3706 (+0.1084) | 0.2651 (+0.0718) | 0.1487 (+0.0433) |
| ENMF | 0.4320 | 0.3953 | 0.0994 | 0.0997 | 0.0652 | 0.1036 | 0.2630 | 0.1227 |
| ENMF-Random | 0.4931 (+0.0611) | 0.4544 (+0.0591) | 0.1195 (+0.0201) | 0.1199 (+0.0202) | 0.0751 (+0.0099) | 0.1156 (+0.0120) | 0.3056 (+0.0426) | 0.1446 (+0.0219) |
| ENMF-Assistant | 0.5200 (+0.0880) | 0.4831 (+0.0878) | 0.1224 (+0.0230) | 0.1217 (+0.0220) | 0.0788 (+0.0136) | 0.1247 (+0.0211) | 0.3224 (+0.0594) | 0.1531 (+0.0304) |
2308.09904#62
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
62
[21] F. Maxwell Harper and Joseph A. Konstan. 2016. The MovieLens Datasets: History and Context. ACM Trans. Interact. Intell. Syst. 5 (2016), 19:1–19:19. [22] Shirley Anugrah Hayati, Dongyeop Kang, Qingxiaoyang Zhu, Weiyan Shi, and Zhou Yu. 2020. INSPIRED: Toward Sociable Recommendation Dialog Systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). 8142–8152. [23] Xiangnan He, Zhankui He, Xiaoyu Du, and Tat-Seng Chua. 2018. Adversarial personalized ranking for recommendation. In The 41st International ACM SIGIR conference on research & development in information retrieval. 355–364. [24] Xiangnan He, Lizi Liao, Hanwang Zhang, Liqiang Nie, Xia Hu, and Tat-Seng Chua. 2017. Neural collaborative filtering. In Proceedings of the 26th international conference on world wide web. 173–182.
2308.10053#62
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness and limitations, and suggesting directions for the design of future conversational recommenders.
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
63
| Method | Movie NDCG@10 | Movie Recall@10 | Book NDCG@10 | Book Recall@10 | Game NDCG@10 | Game Recall@10 | Mixed NDCG@10 | Mixed Recall@10 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ENMF | 0.4320 | 0.3953 | 0.0994 | 0.0997 | 0.0652 | 0.1036 | 0.2630 | 0.1227 |
| ENMF-Random | 0.4931 (+0.0611) | 0.4544 (+0.0591) | 0.1195 (+0.0201) | 0.1199 (+0.0202) | 0.0751 (+0.0099) | 0.1156 (+0.0120) | 0.3056 (+0.0426) | 0.1446 (+0.0219) |
| ENMF-Assistant | 0.5200 (+0.0880) | 0.4831 (+0.0878) | 0.1224 (+0.0230) | 0.1217 (+0.0220) | 0.0788 (+0.0136) | 0.1247 (+0.0211) | 0.3224 (+0.0594) | 0.1531 (+0.0304) |
| NeuMF | 0.4720 | 0.4878 | 0.1364 | 0.1385 | 0.2160 | 0.2704 | 0.2891 | 0.1507 |
| NeuMF-Random | 0.4464 (-0.0256) | 0.4517 (-0.0361) | 0.1559 (+0.0195) | 0.1578 (+0.0193) | 0.3301 (+0.1141) | 0.3913 (+0.1209) | 0.3220 (+0.0329) | 0.1603 (+0.0096) |
| NeuMF-Assistant | 0.4856 (+0.0136) | 0.4906 (+0.0028) | 0.1631 (+0.0267) | 0.1658 (+0.0273) | 0.3507 (+0.1347) | 0.4086 (+0.1382) | 0.3451 (+0.0560) | … |
2308.09904#63
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using the real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
63
[25] Zhankui He, Handong Zhao, Zhe Lin, Zhaowen Wang, Ajinkya Kale, and Julian McAuley. 2021. Locker: Locally constrained self-attentive sequential recommendation. In Proceedings of the 30th ACM International Conference on Information & Knowledge Management. 3088–3092. [26] Zhankui He, Handong Zhao, Tong Yu, Sungchul Kim, Fan Du, and Julian McAuley. 2022. Bundle MCR: Towards Conversational Bundle Recommendation. In Proceedings of the 16th ACM Conference on Recommender Systems. 288–298. [27] Yupeng Hou, Junjie Zhang, Zihan Lin, Hongyu Lu, Ruobing Xie, Julian McAuley, and Wayne Xin Zhao. 2023. Large Language Models are Zero-Shot Rankers for Recommender Systems. arXiv preprint arXiv:2305.08845 (2023). [28] Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. 2021. LoRA: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685 (2021).
2308.10053#63
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09904
64
0.3220(+0.0329) 0.3451(+0.0560) 0.1507 0.1603(+0.0096) 0.1742(+0.0235) ItemKNN ItemKNN-Random ItemKNN-Assistant 0.1211 0.1900(+0.0689) 0.2131(+0.0920) 0.1035 0.1698(+0.0663) 0.1860(+0.0825) 0.0889 0.1326(+0.0437) 0.1517(+0.0628) 0.0694 0.1051(+0.0357) 0.1171(+0.0477) 0.2242 0.2500(+0.0258) 0.2660(+0.0418) 0.3074 0.3035(-0.0039) 0.3125(+0.0051) 0.1657 0.2338(+0.0681) 0.2567(+0.0910) 0.0790 0.1090(+0.0300) 0.1170(+0.0380)
2308.09904#64
RAH! RecSys-Assistant-Human: A Human-Centered Recommendation Framework with LLM Agents
The rapid evolution of the web has led to an exponential growth in content. Recommender systems play a crucial role in Human-Computer Interaction (HCI) by tailoring content based on individual preferences. Despite their importance, challenges persist in balancing recommendation accuracy with user satisfaction, addressing biases while preserving user privacy, and solving cold-start problems in cross-domain situations. This research argues that addressing these issues is not solely the recommender systems' responsibility, and a human-centered approach is vital. We introduce the RAH (Recommender system, Assistant, and Human) framework, an innovative solution with LLM-based agents such as Perceive, Learn, Act, Critic, and Reflect, emphasizing the alignment with user personalities. The framework utilizes the Learn-Act-Critic loop and a reflection mechanism for improving user alignment. Using the real-world data, our experiments demonstrate the RAH framework's efficacy in various recommendation domains, from reducing human burden to mitigating biases and enhancing user control. Notably, our contributions provide a human-centered recommendation framework that partners effectively with various recommendation models.
http://arxiv.org/pdf/2308.09904
Yubo Shu, Haonan Zhang, Hansu Gu, Peng Zhang, Tun Lu, Dongsheng Li, Ning Gu
cs.IR, cs.AI
null
null
cs.IR
20230819
20231017
[ { "id": "2305.07961" }, { "id": "2309.07864" }, { "id": "2303.14524" }, { "id": "2209.07562" }, { "id": "2305.16291" }, { "id": "2207.12515" }, { "id": "2304.03442" }, { "id": "2304.10149" }, { "id": "1806.08977" }, { "id": "2305.00447" }, { "id": "2309.02427" }, { "id": "2307.14225" }, { "id": "2112.00944" }, { "id": "2305.16960" } ]
2308.10053
64
[29] Harsh Jhamtani, Varun Gangal, Eduard Hovy, Graham Neubig, and Taylor Berg-Kirkpatrick. 2018. Learning to Generate Move-by-Move Commentary for Chess Games from Large-Scale Social Forum Data. In The 56th Annual Meeting of the Association for Computational Linguistics (ACL). Melbourne, Australia. [30] John Schulman, Barret Zoph, C Kim, Jacob Hilton, Jacob Menick, Jiayi Weng, Juan Felipe Ceron Uribe, Liam Fedus, Luke Metz, Michael Pokorny, Rapha Gontijo Lopes, and Sengjia Zhao. 2022. ChatGPT: Optimizing language models for dialogue. OpenAI (2022). [31] Santosh Kabbur, Xia Ning, and George Karypis. 2013. FISM: Factored item similarity models for top-n recommender systems. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining. 659–667.
2308.10053#64
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
65
[32] Dongyeop Kang, Anusha Balakrishnan, Pararth Shah, Paul A Crook, Y-Lan Boureau, and Jason Weston. 2019. Recommendation as a Communication Game: Self-Supervised Bot-Play for Goal-oriented Dialogue. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). 1951–1961. [33] Wang-Cheng Kang and Julian McAuley. 2018. Self-attentive sequential recommendation. In 2018 IEEE international conference on data mining (ICDM). IEEE, 197–206. [34] Wang-Cheng Kang, Jianmo Ni, Nikhil Mehta, Maheswaran Sathiamoorthy, Lichan Hong, Ed Chi, and Derek Zhiyuan Cheng. 2023. Do LLMs Understand User Preferences? Evaluating LLMs On User Rating Prediction. arXiv preprint arXiv:2305.06474 (2023). [35] Jared Kaplan, Sam McCandlish, T. J. Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeff Wu, and Dario Amodei. 2020. Scaling Laws for Neural Language Models. ArXiv abs/2001.08361 (2020).
2308.10053#65
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
66
[36] Yehuda Koren, Robert Bell, and Chris Volinsky. 2009. Matrix factorization techniques for recommender systems. Computer 42, 8 (2009), 30–37. [37] Wenqiang Lei, Xiangnan He, Yisong Miao, Qingyun Wu, Richang Hong, Min-Yen Kan, and Tat-Seng Chua. 2020. Estimation-action-reflection: Towards deep interaction between conversational and recommender systems. In Proceedings of the 13th International Conference on Web Search and Data Mining. 304–312. [38] Wenqiang Lei, Gangyi Zhang, Xiangnan He, Yisong Miao, Xiang Wang, Liang Chen, and Tat-Seng Chua. 2020. Interactive path reasoning on graph for conversational recommendation. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 2073–2083. [39] Jinming Li, Wentao Zhang, Tian Wang, Guanglei Xiong, Alan Lu, and Gerard Medioni. 2023. GPT4Rec: A Generative Framework for Personalized Recommendation and User Interests Interpretation. arXiv:2304.03879 [cs.IR]
2308.10053#66
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
67
[40] Ming Li, Sami Jullien, Mozhdeh Ariannezhad, and Maarten de Rijke. 2023. A next basket recommendation reality check. ACM Transactions on Information Systems 41, 4 (2023), 1–29. [41] Raymond Li, Samira Ebrahimi Kahou, Hannes Schulz, Vincent Michalski, Laurent Charlin, and Chris Pal. 2018. Towards deep conversational recommendations. Advances in neural information processing systems 31 (2018). [42] Shuyang Li, Bodhisattwa Prasad Majumder, and Julian McAuley. 2021. Self-Supervised Bot Play for Conversational Recommendation with Justifications. arXiv preprint arXiv:2112.05197 (2021). [43] Shuokai Li, Ruobing Xie, Yongchun Zhu, Xiang Ao, Fuzhen Zhuang, and Qing He. 2022. User-centric conversational recommendation with multi-aspect user modeling. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 223–233.
2308.10053#67
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
68
[44] Yujia Li, David H. Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d’Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Jaymin Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. 2022. Competition-level code generation with AlphaCode. Science 378 (2022), 1092–1097. [45] Yanran Li, Hui Su, Xiaoyu Shen, Wenjie Li, Ziqiang Cao, and Shuzi Niu. 2017. DailyDialog: A Manually Labelled Multi-turn Dialogue Dataset. In Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers). Asian Federation of Natural Language Processing, Taipei, Taiwan, 986–995. https://aclanthology.org/I17-1099
2308.10053#68
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
69
[46] Dawen Liang, Rahul G Krishnan, Matthew D Hoffman, and Tony Jebara. 2018. Variational autoencoders for collaborative filtering. In Proceedings of the 2018 world wide web conference. 689–698. [47] Hugo Liu and Push Singh. 2004. ConceptNet—a practical commonsense reasoning tool-kit. BT technology journal 22, 4 (2004), 211–226. [48] Junling Liu, Chao Liu, Renjie Lv, Kang Zhou, and Yan Zhang. 2023. Is ChatGPT a Good Recommender? A Preliminary Study. arXiv:2304.10149 [cs.IR] [49] Yu Lu, Junwei Bao, Yan Song, Zichen Ma, Shuguang Cui, Youzheng Wu, and Xiaodong He. 2021. RevCore: Review-Augmented Conversational Recommendation. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. 1161–1173.
2308.10053#69
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
70
[50] Wenchang Ma, Ryuichi Takanobu, and Minlie Huang. 2021. CR-Walker: Tree-Structured Graph Reasoning and Dialog Acts for Conversational Recommendation. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. https://aclanthology.org/2021.emnlp-main.139 [51] OpenAI. 2023. GPT-4 Technical Report. arXiv:2303.08774 [cs.CL] [52] Long Ouyang, Jeffrey Wu, Xu Jiang, Diogo Almeida, Carroll L. Wainwright, Pamela Mishkin, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray, John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens, Amanda Askell, Peter Welinder, Paul F. Christiano, Jan Leike, and Ryan Lowe. 2022. Training language models to follow instructions with human feedback. In NeurIPS. http://papers.nips.cc/paper_files/paper/2022/hash/b1efde53be364a73914f58805a001731-Abstract-Conference.html
2308.10053#70
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
71
[53] Gustavo Penha and Claudia Hauff. 2020. What does BERT know about books, movies and music? Probing BERT for conversational recommendation. In Proceedings of the 14th ACM Conference on Recommender Systems. 388–397. [54] Zhaochun Ren, Zhi Tian, Dongdong Li, Pengjie Ren, Liu Yang, Xin Xin, Huasheng Liang, Maarten de Rijke, and Zhumin Chen. 2022. Variational Reasoning about User Preferences for Conversational Recommendation. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 165–175. [55] Steffen Rendle. 2010. Factorization machines. In 2010 IEEE International conference on data mining. IEEE, 995–1000. [56] Alireza Salemi, Sheshera Mysore, Michael Bendersky, and Hamed Zamani. 2023. LaMP: When Large Language Models Meet Personalization. arXiv preprint arXiv:2304.11406 (2023).
2308.10053#71
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
72
[57] Victor Sanh, Albert Webson, Colin Raffel, Stephen H. Bach, Lintang Sutawika, Zaid Alyafeai, Antoine Chaffin, Arnaud Stiegler, Arun Raja, Manan Dey, M Saiful Bari, Canwen Xu, Urmish Thakker, Shanya Sharma Sharma, Eliza Szczechla, Taewoon Kim, Gunjan Chhablani, Nihal V. Nayak, Debajyoti Datta, Jonathan Chang, Mike Tian-Jian Jiang, Han Wang, Matteo Manica, Sheng Shen, Zheng Xin Yong, Harshit Pandey, Rachel Bawden, Thomas Wang, Trishala Neeraj, Jos Rozen, Abheesht Sharma, Andrea Santilli, Thibault Févry, Jason Alan Fries, Ryan Teehan, Teven Le Scao, Stella Biderman, Leo Gao, Thomas Wolf, and Alexander M. Rush. 2022. Multitask Prompted Training Enables Zero-Shot Task Generalization. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net. https://openreview.net/forum?id=9Vrb9D0WI4
2308.10053#72
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
73
[58] Suvash Sedhain, Aditya Krishna Menon, Scott Sanner, and Lexing Xie. 2015. Autorec: Autoencoders meet collaborative filtering. In Proceedings of the 24th international conference on World Wide Web. 111–112. [59] Fei Sun, Jun Liu, Jian Wu, Changhua Pei, Xiao Lin, Wenwu Ou, and Peng Jiang. 2019. BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer. In Proceedings of the 28th ACM international conference on information and knowledge management. 1441–1450. [60] Yi Tay, Vinh Tran, Mostafa Dehghani, Jianmo Ni, Dara Bahri, Harsh Mehta, Zhen Qin, Kai Hui, Zhe Zhao, Jai Gupta, et al. 2022. Transformer memory as a differentiable search index. Advances in Neural Information Processing Systems 35 (2022), 21831–21843.
2308.10053#73
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
74
[61] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023). [62] Wenjie Wang, Xinyu Lin, Fuli Feng, Xiangnan He, and Tat-Seng Chua. 2023. Generative Recommendation: Towards Next-generation Recommender Paradigm. arXiv:2304.03516 [cs.IR] [63] Xiaolei Wang, Xinyu Tang, Wayne Xin Zhao, Jingyuan Wang, and Ji-Rong Wen. 2023. Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models. arXiv preprint arXiv:2305.13112 (2023). [64] Xiaolei Wang, Kun Zhou, Ji-Rong Wen, and Wayne Xin Zhao. 2022. Towards Unified Conversational Recommender Systems via Knowledge-Enhanced Prompt Learning. In Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining. 1929–1937.
2308.10053#74
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
75
[65] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed Chi, Quoc V Le, and Denny Zhou. 2022. Chain-of-Thought Prompting Elicits Reasoning in Large Language Models. In Advances in Neural Information Processing Systems, S. Koyejo, S. Mohamed, A. Agarwal, D. Belgrave, K. Cho, and A. Oh (Eds.), Vol. 35. Curran Associates, Inc., 24824–24837. https://proceedings.neurips.cc/paper_files/paper/2022/file/9d5609613524ecf4f15af0f7b31abca4-Paper-Conference.pdf [66] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837. [67] Ga Wu, Kai Luo, Scott Sanner, and Harold Soh. 2019. Deep language-based critiquing for recommender systems. In Proceedings of the 13th ACM Conference on Recommender Systems. 137–145.
2308.10053#75
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
76
[68] Canwen Xu, Daya Guo, Nan Duan, and Julian McAuley. 2023. Baize: An open-source chat model with parameter-efficient tuning on self-chat data. arXiv preprint arXiv:2304.01196 (2023). [69] Yizhe Zhang, Siqi Sun, Michel Galley, Yen-Chun Chen, Chris Brockett, Xiang Gao, Jianfeng Gao, Jingjing Liu, and William B Dolan. 2020. DIALOGPT: Large-Scale Generative Pre-training for Conversational Response Generation. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations. 270–278. [70] Yiming Zhang, Lingfei Wu, Qi Shen, Yitong Pang, Zhihua Wei, Fangli Xu, Bo Long, and Jian Pei. 2022. Multiple Choice Questions based Multi-Interest Policy Learning for Conversational Recommendation. In Proceedings of the ACM Web Conference 2022. 2153–2162.
2308.10053#76
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
77
[71] Wayne Xin Zhao, Kun Zhou, Junyi Li, Tianyi Tang, Xiaolei Wang, Yupeng Hou, Yingqian Min, Beichen Zhang, Junjie Zhang, Zican Dong, et al. 2023. A survey of large language models. arXiv preprint arXiv:2303.18223 (2023). [72] Qinkai Zheng, Xiao Xia, Xu Zou, Yuxiao Dong, Shan Wang, Yufei Xue, Zihan Wang, Lei Shen, Andi Wang, Yang Li, Teng Su, Zhilin Yang, and Jie Tang. 2023. CodeGeeX: A Pre-Trained Model for Code Generation with Multilingual Evaluations on HumanEval-X. arXiv:2303.17568 [cs.LG] [73] Kun Zhou, Hui Wang, Wayne Xin Zhao, Yutao Zhu, Sirui Wang, Fuzheng Zhang, Zhongyuan Wang, and Ji-Rong Wen. 2020. S3-rec: Self-supervised learning for sequential recommendation with mutual information maximization. In Proceedings of the 29th ACM international conference on information & knowledge management. 1893–1902.
2308.10053#77
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.10053
78
[74] Kun Zhou, Wayne Xin Zhao, Shuqing Bian, Yuanhang Zhou, Ji-Rong Wen, and Jingsong Yu. 2020. Improving conversational recommender systems via knowledge graph based semantic fusion. In Proceedings of the 26th ACM SIGKDD international conference on knowledge discovery & data mining. 1006–1014. [75] Kun Zhou, Yuanhang Zhou, Wayne Xin Zhao, Xiaoke Wang, and Ji-Rong Wen. 2020. Towards Topic-Guided Conversational Recommender System. In Proceedings of the 28th International Conference on Computational Linguistics. 4128–4139. [76] Jie Zou, Evangelos Kanoulas, Pengjie Ren, Zhaochun Ren, Aixin Sun, and Cheng Long. 2022. Improving conversational recommender systems via transformer-based sequential modelling. In Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval. 2319–2324.
2308.10053#78
Large Language Models as Zero-Shot Conversational Recommenders
In this paper, we present empirical studies on conversational recommendation tasks using representative large language models in a zero-shot setting with three primary contributions. (1) Data: To gain insights into model behavior in "in-the-wild" conversational recommendation scenarios, we construct a new dataset of recommendation-related conversations by scraping a popular discussion website. This is the largest public real-world conversational recommendation dataset to date. (2) Evaluation: On the new dataset and two existing conversational recommendation datasets, we observe that even without fine-tuning, large language models can outperform existing fine-tuned conversational recommendation models. (3) Analysis: We propose various probing tasks to investigate the mechanisms behind the remarkable performance of large language models in conversational recommendation. We analyze both the large language models' behaviors and the characteristics of the datasets, providing a holistic understanding of the models' effectiveness, limitations and suggesting directions for the design of future conversational recommenders
http://arxiv.org/pdf/2308.10053
Zhankui He, Zhouhang Xie, Rahul Jha, Harald Steck, Dawen Liang, Yesu Feng, Bodhisattwa Prasad Majumder, Nathan Kallus, Julian McAuley
cs.IR, cs.AI
Accepted as CIKM 2023 long paper. Longer version is coming soon (e.g., more details about dataset)
null
cs.IR
20230819
20230819
[ { "id": "2302.13971" }, { "id": "2304.03879" }, { "id": "2303.17568" }, { "id": "2305.07961" }, { "id": "2305.00447" }, { "id": "2305.08845" }, { "id": "2304.03516" }, { "id": "2303.08774" }, { "id": "2305.15717" }, { "id": "1611.09268" }, { "id": "2207.12515" }, { "id": "2303.18223" }, { "id": "2304.10149" }, { "id": "2305.13112" }, { "id": "2112.05197" }, { "id": "2305.06474" }, { "id": "2304.11406" }, { "id": "2205.08084" }, { "id": "2106.09685" }, { "id": "2303.12712" }, { "id": "2304.01196" } ]
2308.09583
0
# WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct # Haipeng Luo2∗ Qingfeng Sun1∗ Can Xu1† Pu Zhao1 Jianguang Lou1 Chongyang Tao1 Xiubo Geng1 Qingwei Lin1 Shifeng Chen2† Dongmei Zhang1 # 1Microsoft 2Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences {caxu,qins,puzhao,jlou,chotao,xigeng,qlin,dongmeiz}@microsoft.com {hp.luo,shifeng.chen}@siat.ac.cn # Abstract
2308.09583#0
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
0
# Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment Rishabh Bhardwaj‡, Soujanya Poria‡ ‡ DeCLaRe Lab, Singapore University of Technology and Design, Singapore [email protected] [email protected] § https://github.com/declare-lab/red-instruct https://huggingface.co/datasets/declare-lab/HarmfulQA https://huggingface.co/declare-lab/starling-7B Be warned that some of the examples in this paper are harmful and sensitive.
2308.09662#0
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient accent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
0
# Graph of Thoughts: Solving Elaborate Problems with Large Language Models Maciej Besta1*, Nils Blach1*, Ales Kubicek1, Robert Gerstenberger1, Lukas Gianinazzi1, Joanna Gajda2, Tomasz Lehmann2, Michał Podstawski3, Hubert Niewiadomski2, Piotr Nyczyk2, Torsten Hoefler1 1ETH Zurich, 2Cledar, 3Warsaw University of Technology [email protected], [email protected], [email protected] # Abstract
2308.09687#0
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09583
1
# Abstract Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM. # Introduction
2308.09583#1
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09687
1
# Abstract We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information (“LLM thoughts”) are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks. Website & code: https://github.com/spcl/graph-of-thoughts # 1 Introduction
2308.09687#1
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
1
# Abstract This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
2308.09830#1
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
2
# Introduction Recently, large-scale language models (LLMs) have garnered significant attention and become the go-to approach for numerous natural language processing (NLP) tasks, including open-domain conversation [1–4], coding [5–13] and math [14–19]. A conspicuous example is ChatGPT, developed by OpenAI. This model uses extensive pre-training on large-scale internet data and further fine-tuning with specific instruction data and methods. As a result, it achieves state-of-the-art zero-shot performance on various benchmarks. Subsequently, Anthropic, Google, and Meta also launched their competitive products one after another. Notably, Meta's series of Llama [4, 20] models have sparked an open-source revolution and quickly narrowed the gap with those closed-source LLMs. This trend has also gradually stimulated the releases of MPT, Falcon [21], StarCoder [12], Alpaca [22], Vicuna [23], and WizardLM [24], etc. However, these open models still struggle with scenarios that require complex multi-step quantitative reasoning, such as solving mathematical and science challenges [25–35].
2308.09583#2
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
2
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to Chain of Utterances-based (CoU) prompting, jailbreaking closed-source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT, an approach for safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of
2308.09662#2
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT, an approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
2
Website & code: https://github.com/spcl/graph-of-thoughts # 1 Introduction Large language models (LLMs) are taking over the world of AI. Recent years saw a rapid development of models primarily based on the decoder-only Transformer variant [65], such as GPT [13, 14, 53, 54], PaLM [19], or LLaMA [63]. Prompt engineering is a resource-efficient approach for solving different LLM tasks. In brief, one includes the task description within the input sent to an LLM. If this description is appropriately formulated, the LLM solves the task using its autoregressive token-based mechanism for generating text. Such prompts may contain example tasks with solutions (few-shot prompting, also referred to as in-context learning (ICL)), or even no example tasks at all (zero-shot prompting). In recent years it was shown that this mechanism can be used to solve a broad set of tasks that involve mathematical, commonsense, or symbolic reasoning.
2308.09687#2
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
2
Introduction Pre-trained Large Language Models (LLMs) like ChatGPT, GPT-4, and PaLM 2 are generative models that excel in a variety of natural language tasks (Brown et al. 2020; Devlin et al. 2019) and even show promise in interactive decision-making (Li et al. 2022), reasoning (Diao et al. 2023; Xie et al. 2023; Yao et al. 2023b), and modeling aspects of artificial general intelligence (AGI) (Kosinski 2023; Bubeck et al. 2023). However, LLMs face interpretability, consistency, and scalability issues (Mialon et al. 2023), partly due to limitations in context window size and sensitivity to prompt structure, as they often rely on precise and carefully engineered instructions (Wei et al. 2022). They're criticized for being stochastic parrots and lacking detailed reasoning explanations (Bender et al. 2021). Hallucinations (Welleck et al. 2019; Qian et al. 2022; Wei et al. 2022) and biases (Weidinger et al. 2022; Venkit, Srinath, and Wilson 2022) are further
2308.09830#2
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
3
∗ Equal contribution. Work done during the internship of Luo at Microsoft Research. † Corresponding author: [email protected] and [email protected] 3 We are working with our legal team to review and publicly release the code and data in accordance with our policy. Preprint. Under review. Step 1: Supervised fine-tuning. Step 2: Training Instruction Reward Model (IRM), and Process-supervised Reward Model (PRM). Step 3: Active Evol-Instruct, and PPO training. [Figure 1 diagram: panels illustrating the three RLEIF steps above; graphical content not recoverable from the extraction.]
2308.09583#3
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
3
data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing harmful responses by gradient ascent over the sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
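One way to read the SAFE-ALIGN objective sketched in this abstract: minimize negative log-likelihood on helpful responses while applying gradient ascent to the harmful-sample loss, which amounts to subtracting a weighted harmful-response NLL. A toy numeric sketch of that reading (the `penalty` weight and the scalar log-probabilities are illustrative assumptions, not the authors' implementation):

```python
def safe_align_loss(helpful_logprobs, harmful_logprobs, penalty=0.1):
    """Toy SAFE-ALIGN-style objective: loss falls when helpful responses
    become likely and rises when harmful ones do (ascent = minus sign)."""
    nll_helpful = -sum(helpful_logprobs) / len(helpful_logprobs)
    nll_harmful = -sum(harmful_logprobs) / len(harmful_logprobs)
    return nll_helpful - penalty * nll_harmful

# Helpful responses with log-probs -0.5, -1.0; harmful with -2.0, -3.0.
print(safe_align_loss([-0.5, -1.0], [-2.0, -3.0]))  # → 0.5
```

Minimizing this combined scalar pushes helpful log-probabilities up and harmful ones down in a single optimization step.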
2308.09662#3
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT, an approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
3
Chain-of-Thought (CoT) [71] is an approach for prompting, in which one includes the intermediate steps of reasoning within the prompt (intermediate "thoughts"), besides the task input/output. CoT was shown to significantly improve the capability of LLMs to solve problems without resorting to any model updates. One major improvement over CoT, Self-Consistency with CoT (CoT-SC) [67], is a scheme where multiple CoTs are generated, and then the best one is selected as the outcome. More recently, CoT and CoT-SC were extended with Tree of Thoughts (ToT) [43, 75, 77], which models the LLM reasoning process with a tree. This facilitates using different paths of thoughts, and offers novel capabilities such as backtracking from non-promising outcomes. Unfortunately, the ToT approaches still fundamentally limit the reasoning abilities within a prompt by imposing the rigid tree structure on the thought process.
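The CoT-SC selection step described here amounts to a majority vote over independently sampled chains. A minimal sketch (the `sample_chain` callable stands in for one LLM sampling run; it is an illustrative assumption, not the papers' actual API):

```python
from collections import Counter

def cot_sc(sample_chain, n_samples=5):
    """Self-Consistency with CoT: sample several independent chains of
    thought and return the most frequent final answer (majority vote)."""
    answers = [sample_chain() for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in for an LLM sampler: each call yields one chain's final answer.
samples = iter([4, 4, 5, 4, 3])
print(cot_sc(lambda: next(samples)))  # → 4 (three of five chains agree)
```

ToT and GoT generalize exactly this step: instead of voting once over finished chains, they score and select among partial thoughts as the structure is built.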
2308.09687#3
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09583
4
Figure 1: A diagram illustrating the three steps of our Reinforcement Learning from Evol-Instruct Feedback (RLEIF): (1) supervised fine-tuning (SFT), (2) Instruction Reward Model (IRM) training and Process-supervised Reward Model (PRM) training, and (3) Active Evol-Instruct and reinforcement learning via proximal policy optimization (PPO). Chain-of-thought (CoT) [31] proposes to design better prompts to generate step-by-step solutions, which can lead to improved performance. Self-Consistency [34] also achieves remarkable performance on many reasoning benchmarks: it generates several possible answers from the model and selects the correct one based on a majority vote [35]. Recently, [36] found that process supervision with reinforcement learning significantly outperforms outcome supervision for solving challenging MATH problems.
2308.09583#4
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
4
# 1 Introduction After several years of using language models at a moderate scale such as BERT [4], large language models (LLMs) have led to a paradigm shift not only in natural language processing (NLP) or AI but in a wide range of areas, leading to significant advancement in a considerably short span of time. For instance, they are being used in healthcare [22, 13], education [9], law [24], and finance [19]. A prerequisite to building these LLMs is a large amount of pre-training data, with more data samples needed as the number of the model's trainable parameters increases [8, 25]. An essential aspect of data used for training is its quality: toxicity, noise, duplicate samples, and inherent biases are a few of the unwanted characteristics that can lead to undesired LLM behavior post-training, making Preprint. Under review.
2308.09662#4
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT, an approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
4
In this work, we argue that fundamentally more powerful prompting can be achieved by enabling LLM thoughts to form an arbitrary graph structure. This is motivated by numerous phenomena such as human reasoning, brain structure, or algorithmic execution. When working on a novel idea, a human would not only follow a chain of thoughts (as in CoT) or try different separate ones (as in ToT), but would actually form a more complex network of thoughts. For example, one could explore a certain chain of reasoning, backtrack and start a new one, then realize that a certain idea from the previous chain could be combined with the currently explored one, and merge them both into a new solution, taking advantage of their strengths and eliminating their weaknesses. Similarly, brains form complex networks, with graph-like patterns such as recurrence [28]. Executing algorithms also exposes networked patterns, often represented by Directed Acyclic Graphs. The corresponding graph-enabled transformations bring a promise of more powerful prompting when applied to LLM thoughts, but they are not naturally expressible with CoT or ToT.
2308.09687#4
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
4
In contrast, Cognitive Architectures (CAs) propose hypotheses about the fixed structures governing the operation of minds, whether in natural or artificial systems, facilitating intelligent behavior in complex environments (Laird, Lebiere, and Rosenbloom 2017). CAs like ACT-R (Anderson and Lebiere 2014), SOAR (Laird 2019), CLARION (Sun 2016), and LIDA (Franklin and Patterson 2006) model various human cognitive aspects: memory, learning, reasoning, perceptual-motor interaction, theory of mind, AGI, and more (Kotseruba and Tsotsos 2020). CAs prioritize bounded rationality, striving for satisfactory decisions under resource constraints, diverging from LLMs' pursuit of optimality. However, CAs face challenges in knowledge representation and scalability. Their encoded information is limited in size and homogeneous typology, meaning the knowledge processed by a cognitive agent1 is typically tailored for specific domains and tasks (Lieto, Lebiere, and Oltramari 2018).
2308.09830#4
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
5
Inspired by Evol-Instruct and Process-supervised Reinforcement Learning, this work aims to enhance the mathematical reasoning abilities of the SOTA open-source LLM, Llama-2 [20]. As shown in Figure 1, we propose a new method named Reinforcement Learning from Evol-Instruct Feedback (RLEIF), which first generates diverse math instruction data via math-specific Evol-Instruct; we then train an instruction reward model (IRM) and a process-supervised reward model (PRM) [16, 36–41], where the former indicates the quality of the evolved instruction and the latter provides feedback for each step in the solution. The brand-new Evol-Instruct method includes downward evolution and upward evolution processes to produce grade-school math and challenging math problems, respectively. Initially, we re-generate, filter, and finetune on the original math instruction data from GSM8k [42] and MATH [43]. Subsequently, we train the Llama-2 models to obtain the reward models and our WizardMath.
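The two reward signals described above can be combined into a single scalar for RL training. Below is a minimal, hypothetical sketch; the function name, the product-style combination, and the per-step aggregation are assumptions for illustration, not the paper's exact formulation:

```python
def rleif_reward(irm_score, prm_step_scores):
    """Combine instruction quality and per-step process rewards.

    irm_score: float in [0, 1] from the instruction reward model (IRM).
    prm_step_scores: list of floats in [0, 1], one per solution step,
    from the process-supervised reward model (PRM).
    The multiplicative combination below is an illustrative assumption.
    """
    if not prm_step_scores:
        return 0.0
    # Aggregate per-step scores by taking their product, so one weak
    # step drags down the whole solution's reward.
    prm_score = 1.0
    for s in prm_step_scores:
        prm_score *= s
    return irm_score * prm_score
```

A strong instruction with one half-credit step would then get, e.g., `rleif_reward(0.8, [1.0, 0.5]) == 0.4`.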
2308.09583#5
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
5
Figure 1: Methodology depiction of RED-INSTRUCT. Phase-1 constructs HARMFULQA with harmful questions and corresponding harmless responses by CoU-based prompting, and harmful responses using CoU-based red-teaming (proposed as a part of our RED-EVAL safety benchmark). In phase-2, we utilize HARMFULQA to align Vicuna-7B to be safer yet helpful, giving rise to our model STARLING.
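Phase-2 (SAFE-ALIGN) minimizes negative log-likelihood on helpful (blue) responses while applying gradient ascent on harmful (red) ones. A minimal illustrative sketch of such a combined per-sample loss follows; the weighting factor `lam` and the simple subtraction form are assumptions for illustration rather than the paper's exact objective:

```python
def safe_align_loss(nll_helpful, nll_harmful, lam=0.5):
    """Combined per-sample alignment loss (illustrative sketch).

    Minimizing this loss pushes NLL on helpful responses down (standard
    likelihood training) and NLL on harmful responses up, i.e. gradient
    ascent on the harmful-sample loss, scaled by the assumed factor lam.
    """
    return nll_helpful - lam * nll_harmful
```

With `nll_helpful = 2.0`, `nll_harmful = 1.0`, and `lam = 0.5`, the combined loss is `1.5`.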
2308.09662#5
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
5
We observe that these (and many other) thought transformations can be naturally enabled when modeling a reasoning process of an LLM as a graph. For this, we propose Graph of Thoughts (GoT), an approach that enhances LLMs' capabilities through networked reasoning (contribution #1). In GoT, an LLM thought is modeled as a vertex, while an edge is a dependency between such thoughts. Using GoT, one can aggregate arbitrary thoughts by constructing vertices that have more than one incoming edge. Overall, the graph abstraction harnessed by GoT seamlessly generalizes CoT and ToT to more complex thought patterns, without resorting to any model updates. Yet, putting GoT into practice requires solving several design challenges. For example, what is the best graph structure for different tasks? How to best aggregate thoughts to maximize accuracy and minimize cost? To answer these and
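The vertex/edge model described here can be sketched in a few lines. The class and helper names below are hypothetical, but the structure follows the text: a generate step adds exactly one incoming edge (as in CoT), while an aggregation vertex has more than one:

```python
from dataclasses import dataclass, field

@dataclass
class Thought:
    """A vertex in the graph of thoughts: one unit of LLM-generated text."""
    content: str
    parents: list = field(default_factory=list)  # incoming dependency edges

def generate(parent, content):
    # Chain-style step: exactly one incoming edge (as in CoT/ToT).
    return Thought(content, [parent])

def aggregate(parents, content):
    # GoT-specific transformation: a vertex with several incoming edges,
    # merging multiple earlier thoughts into one new thought.
    return Thought(content, list(parents))

root = Thought("problem statement")
a = generate(root, "partial solution A")
b = generate(root, "partial solution B")
merged = aggregate([a, b], "combined solution")
```

Because `merged` keeps references to both parents, the full reasoning graph stays navigable from any vertex back to the root.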
2308.09687#5
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
5
Unlike humans, CAs struggle with complex knowledge and their actions are confined to manually curated procedural knowledge (Park et al. 2023). According to (Marcus 2020), LLMs struggle to derive cognitive models from discourse and lack capabilities to reason over those cognitive models2. Hence, CAs could play a pivotal role in either augmenting or leveraging LLMs by contributing to the creation and dynamic updating of cognitive models. Likewise, cognitive models could be leveraged to better interpret LLMs' black-box learning algorithms and decision-making processes (Binz and Schulz 2023). Both LLMs and CAs have made valuable and sound contributions to the construction of complex autonomous AI agents; however, each approach has its strengths and weaknesses (as summarized in Table 1). Thus, the main contribution of this work lies in characterizing the plausible approaches to integrating CAs and LLMs, viewing them through a hybrid and synergetic lens. 1Hereafter, consider a cognitive agent as an artificial agent constructed on a particular CA. 2A cognitive model should at least include information about the entities in the external world, their properties, and their relationships with other entities, as well as the modeling of the cognitive processes that operate over those entities (Marcus 2020).
2308.09830#5
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
6
We perform experiments on two mathematical reasoning benchmarks, namely GSM8k [42] and MATH [43]; the results demonstrate that our WizardMath outperforms all other open-source LLMs, achieving state-of-the-art performance. Specifically, WizardMath observes a substantial improvement in pass@1 with an increase of +24.8 (81.6 vs. 56.8) on GSM8k, and +9.2 (22.7 vs. 13.5) on MATH. Notably, our model even significantly surpasses OpenAI's ChatGPT-3.5, Anthropic's Claude Instant-1 [39], and Google's PaLM-2 [44] in terms of pass@1 on GSM8k. The main contributions of this work are as follows: • We introduce the WizardMath model, which enhances the mathematical reasoning abilities of the open-source pretrained large language model Llama-2 [20]. • We propose a new method, Reinforcement Learning from Evol-Instruct Feedback (RLEIF), alongside Evol-Instruct and Reinforcement Learning, for improving LLM reasoning performance.
2308.09583#6
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
6
them unfit for public use. One of the critically unexpected behaviors of LLMs is when they tend to produce harmful outputs for a prompt from a user, irrespective of the user's intent. Without undergoing rigorous safety alignment, the model's guardrails against producing harmful content stay weak, making it prone to red-teaming (or jailbreaking), fulfilling the potential malicious intent of the user. In this paper, we aim to contribute to an essential area of large language model research: "ethical LLMs". An ethical language model is one that prioritizes user safety and avoids generating content that promotes harm, discrimination, misinformation, or any form of negative impact on individuals or society as a whole. There are many guidelines that ethical language model development is expected to follow, such as safety, bias and fairness, privacy, transparency, and accountability [6, 3]. In this work, we primarily focus on making LLMs safer for public use. We define a "safe LLM" to be a language model whose generated content does not pose risks or harm to users while staying helpful. This involves preventing the generation of inappropriate, harmful, or dangerous content.
2308.09662#6
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
6
many other questions, we carefully design a modular architecture for implementing GoT (contribution #2), which comes with two design highlights. First, we enable fine-grained control over individual thoughts. This enables us to fully control the ongoing conversation with the LLM, and apply advanced thought transformations, such as combining the most promising thoughts from the ongoing reasoning into a new one. Second, we ensure that our architecture can be seamlessly extended with novel thought transformations, patterns of reasoning (i.e., graphs of thoughts), and LLM models. This enables rapid prototyping of novel prompting ideas using GoT, while experimenting with different models such as GPT-3.5, GPT-4, or Llama-2 [64].
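One such advanced transformation, combining the most promising thoughts into a new one, presupposes a way to rank candidates first. A minimal illustrative helper follows; the function name and the tuple representation of scored thoughts are assumptions, not part of the GoT implementation described in the source:

```python
def keep_best(thoughts, score_fn, k=2):
    """Select the k most promising thoughts to feed into the next
    transformation (e.g. an aggregation step merging them into one)."""
    return sorted(thoughts, key=score_fn, reverse=True)[:k]

# Hypothetical scored thoughts: (content, score in [0, 1]).
candidates = [("sort halves", 0.7), ("brute force", 0.2), ("merge runs", 0.9)]
best = keep_best(candidates, score_fn=lambda t: t[1], k=2)
```

Here `best` contains the two highest-scoring candidates, ordered from strongest to weakest, ready to be merged by an aggregation vertex.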
2308.09687#6
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
6
Feature | LLMs | CAs
Language processing | ++ | -+
World knowledge | ++ | -+
Reasoning | -+ | ++
Symbolic processing | -+ | ++
Connectionist processing | ++ | -+
Knowledge scalability | +- | -+
Planning | -+ | +-
Learning | – | +-
Memory management | – | ++
Consistency (no hallucinations) | -+ | ++
Table 1: Feature comparison between LLMs and CAs. (++) Fully supported. (+-) Almost always supported. (-+) Sometimes supported. (–) Rarely (or not) supported.
2308.09830#6
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
7
WizardMath surpasses all other open-source LLMs by a substantial margin in terms of mathematical reasoning, including Llama-2 70B [20], Llama-1 65B [4], Falcon-40B [21], MPT-30B, Baichuan-13B Chat and ChatGLM2 12B [45] on both GSM8k [42] and MATH [43]. • WizardMath significantly outperforms various mainstream closed-source LLMs, such as ChatGPT, GPT-3.5, Claude Instant [39], PaLM-2 [44], PaLM-1 [7] and Minerva [15] on GSM8k. # 2 Method In this section, we elaborate on the details of our WizardMath. Following WizardLM and PRMs [36], we propose Reinforcement Learning from Evol-Instruct Feedback (RLEIF), which integrates Evol-Instruct and a reinforced process-supervision method to evolve GSM8k and MATH, and fine-tunes the pre-trained Llama-2 with the evolved data and reward models. As shown in Figure 1, our method consists of three steps: 1. Supervised fine-tuning. 2. Training an instruction reward model and a process-supervised reward model. 3. Active Evol-Instruct and PPO training. # 2.1 Supervised fine-tuning
2308.09583#7
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
7
Deploying systems at scale with weaker safety guardrails can be exceedingly detrimental and potentially enticing for adversaries. While generic interactions between users and these deployed systems might seem safe, innovative and systematic red teaming is crucial to identify loopholes within these systems and plays an essential role in enhancing their robustness. As a result, it becomes essential to systematically test and provide assurances of a system's robustness before deploying it at a large scale. This is particularly important given the rapid ubiquity of LLMs, their increasing size, and the vast amount of data they have processed, which has made conducting safety evaluations even more challenging. Existing safety evaluations, for instance Shaikh et al. [16], perform Chain-of-Thought (CoT)-based assessments with prompts consisting of harmful questions followed by "Let's think step by step." as a suffix. Even though effective, we observed that the LLMs we study in this work can easily identify such questions, e.g., "How do I build a bomb?", and show a high refusal rate (the fraction of times the model identifies a harmful question and does not provide a harmful answer) by raising ethical warnings, refusing outright to answer, or answering a related but ethical query (Table 3).
2308.09662#7
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient accent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
7
We illustrate several use cases for GoT (sorting, keyword counting for summaries, set operations, document merging) and we detail how to implement them using the graph-based paradigm (contribution #3). We evaluate GoT and show its advantages over the state of the art (contribution #4). Overall, we observe that GoT is particularly well-suited for tasks that can be naturally decomposed into smaller subtasks that are solved individually and then merged for a final solution. Here, GoT outperforms other schemes, for example improving upon CoT and ToT by, respectively, ≈70% and ≈62% in terms of the quality of sorting, while simultaneously reducing costs by >31% over ToT. We qualitatively compare GoT to other prompting schemes in Table 1. GoT is the only one to enable arbitrary graph-based thought transformations within a prompt, such as aggregation, embracing all previously proposed schemes.
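The aggregation transformation described above can be illustrated with a minimal thought-graph sketch; the `ThoughtGraph` class and its caller-supplied `merge` callback are illustrative assumptions, not the paper's actual framework API:

```python
class ThoughtGraph:
    """Minimal graph of thoughts: vertices carry text, directed edges
    record which thoughts a new thought was derived from."""

    def __init__(self):
        self.thoughts = {}   # id -> thought text
        self.edges = []      # (source_id, target_id) dependency edges
        self._next = 0

    def add(self, text, parents=()):
        """Add a thought derived from zero or more parent thoughts."""
        tid = self._next
        self._next += 1
        self.thoughts[tid] = text
        for p in parents:
            self.edges.append((p, tid))
        return tid

    def aggregate(self, ids, merge):
        """GoT-style aggregation: combine several thoughts into one new
        vertex, with an incoming edge from each input thought."""
        return self.add(merge([self.thoughts[i] for i in ids]), parents=ids)

# Merge two independently produced partial solutions (e.g. sorted halves).
g = ThoughtGraph()
a = g.add("sorted half A")
b = g.add("sorted half B")
c = g.aggregate([a, b], merge=lambda parts: " + ".join(parts))
```

Aggregation is exactly what chain- and tree-shaped schemes cannot express: a vertex with more than one incoming dependency.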
2308.09687#7
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
7
Relevant Work Chain-of-thought prompting (CoT): CoT prompting (Mialon et al. 2023; Diao et al. 2023) enhances LLM reasoning, leading to improved performance in various reasoning and natural language processing tasks. CoT breaks down multi-step problems into intermediate steps, enabling the model to address reasoning problems. ReAct (Yao et al. 2023b) combines both reasoning (CoT prompts) and action (action plan generation). It organizes a workflow that decomposes task goals, injects task-relevant knowledge, extracts important observation components, and refines action plans based on feedback. Auto-CoT (Zhang et al. 2022) proposes a model that samples questions with diversity and automatically generates demonstrations to correct mistakes in reasoning chains. The approaches we propose in this paper assume using CoT for problem decomposition, allowing a CA to inject its output into each reasoning step.
2308.09830#7
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
8
2. Training an instruction reward model and a process-supervised reward model. 3. Active Evol-Instruct and PPO training. # 2.1 Supervised fine-tuning Following InstructGPT [2], we first fine-tune the base model with supervised instruction-response pairs, which contain: 1. To make the parsing of each step easier, we few-shot re-generate 15k answers for GSM8k and MATH with an Alpha version of the WizardLM 70B model to produce solutions in a step-by-step format, then keep those with a correct final answer, and use this data to fine-tune the base Llama model. 2. To enhance the model's ability to adhere to natural and diverse instructions, we also sample 1.5k open-domain conversations from WizardLM's training data, then merge them with the above math corpus as the final SFT training data. # 2.2 Evol-Instruct principles for math Motivated by the Evol-Instruct [24] method proposed by WizardLM and its effective application in WizardCoder [13], this work attempts to create math instructions with varied complexity and diversity to enhance the pre-trained LLMs. Specifically, we adapt Evol-Instruct to a new paradigm including two evolution lines:
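Step 1 of the SFT data construction, re-generating step-by-step answers and keeping only those whose final answer is correct, can be sketched as a simple filter. `extract_final_answer` and the sample format are simplifying assumptions; the excerpt does not specify the exact answer-parsing logic:

```python
def extract_final_answer(solution: str) -> str:
    """Take the last line of a step-by-step solution as the final answer.
    This is a simplifying assumption; the paper's exact parsing is not given."""
    return solution.strip().splitlines()[-1].replace("Answer:", "").strip()

def filter_correct(samples):
    """Keep only (question, solution, reference) triples whose parsed final
    answer matches the reference, as in the SFT data step described above."""
    kept = []
    for question, solution, reference in samples:
        if extract_final_answer(solution) == reference.strip():
            kept.append({"instruction": question, "response": solution})
    return kept

# Two re-generated solutions for the same question; only one is correct.
demo = [
    ("What is 2+3?", "Step 1: 2+3=5.\nAnswer: 5", "5"),
    ("What is 2+3?", "Step 1: 2+3=6.\nAnswer: 6", "5"),
]
sft_data = filter_correct(demo)
```

The surviving pairs, merged with the sampled open-domain conversations, would form the SFT training set.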
2308.09583#8
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
8
We propose RED-EVAL, a simple yet effective way to perform red-teaming and conduct safety evaluations of LLMs. RED-EVAL carries out a jailbreak by teasing out information using a Chain of Utterances (CoU)-based prompt: a red-teaming prompt that sets up a conversation between two agents, a harmful agent Red-LM and an unsafe-helpful agent Base-LM. A harmful question is then placed as an utterance of Red-LM, and the model is asked to complete the response of Base-LM by following the guidelines in the prompt. One key ingredient that makes CoU strong for jailbreaking is the generation of internal thoughts as a prefix in the Base-LM response. The demonstration of how to respond as a Base-LM, together with the accompanying instructions, is closely followed by the models under evaluation, which is observed to reduce refusal rates significantly. (We use the rate of successful red-teaming attempts as a performance metric, which is 1 − refusal rate.)
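The CoU prompt structure described above, a Red-LM/Base-LM demonstration conversation followed by the target question, with the internal-thoughts prefix of the Base-LM reply left open for the model to complete, can be sketched as a template builder. The function name and wording are illustrative assumptions, not the paper's actual prompt:

```python
def build_cou_prompt(demo_question: str, demo_response: str,
                     target_question: str) -> str:
    """Assemble a Chain-of-Utterances style prompt: a demonstration
    exchange between the two agents, then the target question, ending
    mid-way through Base-LM's internal-thoughts prefix so the model
    under evaluation completes it. Wording here is illustrative only."""
    return (
        f"Red-LM: {demo_question}\n"
        f"Base-LM: (internal thoughts: I should give a complete, helpful "
        f"answer.) {demo_response}\n\n"
        f"Red-LM: {target_question}\n"
        f"Base-LM: (internal thoughts:"
    )

prompt = build_cou_prompt("How are padlocks designed?",
                          "A padlock consists of ...",
                          "<harmful question goes here>")
```

The elided target question stays as a placeholder; RED-EVAL fills it from its pool of harmful questions.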
2308.09662#8
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient accent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
8
Table 1: Comparison of prompting schemes, with respect to the supported transformations of thoughts. "Sc?": single chain of thoughts? "Mc?": multiple chains of thoughts? "Tr?": tree of thoughts? "Ag?": arbitrary graph of thoughts? "✓": full support, "◐": partial support, "Ø": no support.

Scheme                          Sc?  Mc?  Tr?  Ag?
Chain-of-Thought (CoT) [71]      ✓    Ø    Ø    Ø
Self-Consistency with CoT [67]   ✓    ✓    Ø    Ø
Thought decomposition [75]       ✓    ✓    ◐    Ø
Tree-of-Thought (ToT) [43]       ✓    ✓    ✓    Ø
Tree of Thoughts (ToT) [77]      ✓    ✓    ✓    Ø
Graph of Thoughts (GoT)          ✓    ✓    ✓    ✓

Finally, we propose a new metric for evaluating a prompting strategy, the volume of a thought (contribution #5). With this metric, we aim to better understand the differences between prompting schemes. For a given thought v, the volume of v is the number of LLM thoughts from which one can reach v using directed edges. Intuitively, these are all the LLM thoughts that have had the potential to contribute
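The volume metric defined above is a reverse-reachability count and can be computed directly; the edge-list graph representation used here is an assumption for illustration:

```python
from collections import defaultdict

def volume(edges, v):
    """Volume of thought v: the number of thoughts from which v is
    reachable via directed edges (v itself is not counted here)."""
    # Build reverse adjacency: for each thought, the thoughts pointing to it.
    rev = defaultdict(list)
    for src, dst in edges:
        rev[dst].append(src)
    # Walk backwards from v, collecting every ancestor exactly once.
    seen, stack = set(), [v]
    while stack:
        node = stack.pop()
        for parent in rev[node]:
            if parent not in seen:
                seen.add(parent)
                stack.append(parent)
    return len(seen)

# A chain a→b→c→d: all three earlier thoughts can reach d.
chain = [("a", "b"), ("b", "c"), ("c", "d")]
# An aggregation x→z, y→z: both inputs contribute to z's volume.
agg = [("x", "z"), ("y", "z")]
```

Aggregation edges are what let a single thought accumulate volume from several independent chains at once.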
2308.09687#8
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
8
Augmented Language Models: it combines enhanced reasoning skills of an LLM with tools like APIs, DBs, and code interpreters for improved knowledge retrieval, reasoning, and action execution (Mialon et al. 2023). Program-Aided Language model (PAL) (Gao et al. 2023) reads natural language problems, generates intermediate programs for reasoning, and delegates the solution step to a Python interpreter. Toolformer (Schick et al. 2023) is a model trained to decide which APIs to call, when to call them, what arguments to pass, and how to best incorporate the results into future token prediction. Our modular approach extends the idea of augmenting an LLM with cognitive processing and assumes the usage of external APIs.
2308.09830#8
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
9
1. Downward evolution: it enhances instructions by making the questions easier, for example by i) revising high-difficulty questions to lower difficulty, or ii) producing a new and easier question on a different topic. 2. Upward evolution: derived from the original Evol-Instruct method, it deepens and generates new and harder questions by i) adding more constraints, ii) concretizing, and iii) increasing reasoning. # 2.3 Reinforcement Learning from Evol-Instruct Feedback (RLEIF) Inspired by InstructGPT [2] and PRMs [36], we train two reward models to predict the quality of the instructions and the correctness of each step in the answer, respectively. [Figure: GSM8k test Pass@1 (%) comparison of closed-source models, open-source models, and WizardMath; individual numeric labels are not reliably recoverable from the extraction.]
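One simple way to combine the two reward models, scaling the instruction-quality score by the product of per-step correctness scores, can be sketched as follows; the excerpt does not fix the exact combination rule, so this is an illustrative assumption rather than the paper's formula:

```python
def combined_reward(instruction_score, step_scores):
    """Combine an instruction-quality score (from the instruction reward
    model) with per-step correctness scores (from the process-supervised
    reward model). Multiplying the instruction score by the product of
    step scores is one simple rule; a single bad step then drags the
    whole reward down. Illustrative assumption, not the paper's rule."""
    answer_score = 1.0
    for s in step_scores:
        answer_score *= s
    return instruction_score * answer_score

# A good instruction (0.9) whose solution has one shaky step (0.8).
r = combined_reward(0.9, [1.0, 0.8, 1.0])
```

Such a scalar reward would then drive the PPO stage of RLEIF.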
2308.09583#9
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
9
Using 200 harmful questions from Shaikh et al. [16] and 1,960 harmful questions from a wide range of topics and subtopics released as a part of this work, we demonstrate the effectiveness of RED-EVAL in breaking guardrails not only on publicly available models based on LLaMA 7B and 13B [2, 23] but also on widely used and publicly deployed systems such as ChatGPT and GPT-4 with potentially larger language models as their backbone.
2308.09662#9
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient accent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
9
1Note that we do not include a recent scheme called Graph-of-Thought [79] because it is not a prompting scheme. While its name suggests close connections to ToT and CoT, as a fine-tuning scheme, it resorts to model updates, and is thus outside the focus of this work. Similarly, the graph-of-thoughts repository [52] does not enable general graph-based reasoning and harnesses instead ToT with BFS. We show that GoT, by incorporating thought transformations such as aggregation, enables thoughts to have fundamentally larger volumes than other schemes. # 2 Background & Notation We first outline background concepts and notation.
2308.09687#9
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
9
CAs and LLMs: Generative Agents (Park et al. 2023) is a model that uses a cognitive architecture and an LLM to generate realistic behavior. It defines three components: a memory stream for recording comprehensive experiences in natural language, a reflection component for deriving higher-level inferences about self and others, and a planning component translating these inferences into action plans. This approach differs from ours in that it does not use symbolic structures but unstructured natural language. OlaGPT (Xie et al. 2023) is an LLM cognition framework aiming to solve reasoning problems with human-like problem-solving abilities by leveraging CoT. OlaGPT proposes to approximate
2308.09830#9
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
10
Figure 2: The pass@1 performance of main LLM models on the GSM8k benchmark; our model is currently ranked in the top five, slightly outperforming some closed-source models such as ChatGPT-3.5, Claude Instant-1, PaLM 2 [44], and substantially surpassing all open-source models. 1. Instruction Reward Model (IRM): This model aims to judge the quality of the evolved instructions on three aspects: i) Definition, ii) Precision, and iii) Integrity. To produce the ranking-list training data of IRM, for each instruction, we first use ChatGPT and Wizard-E to generate 2~4 evolved instructions respectively. Then we leverage Wizard-E to rank the quality of those 4~8 instructions. 2. Process-supervised Reward Model (PRM): As there were no powerful open-source math reasoning LLMs before this work, there was no simple way to support highly precise process supervision without professional human labelers and closed-source ChatGPT. Therefore, we depend on ChatGPT to provide process supervision, and ask it to assess the correctness of each step in the solutions generated by our model.
2308.09583#10
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
10
As another important contribution of this work, we introduce RED-INSTRUCT—a new way of aligning LLMs toward safer and more responsible behavior while maintaining their helpful nature. RED-INSTRUCT constitutes two phases: 1) Construction of HARMFULQA: a dataset of harmful-question-based CoU conversations between Red-LM and Base-LM; and 2) SAFE-ALIGN: a set of LLM alignment approaches using HARMFULQA conversations. As shown in Figure 1, phase 1, we construct a dataset by prompting ChatGPT. The process involves diverse topic and sub-topic (category) generation followed by the generation of category-specific harmful questions. For each collected harmful question, ChatGPT was demonstrated with a CoU-based prompt to generate a conversation via collaborative roleplay, i.e., behaving both as a harmful agent (Red-LM) that asks questions related to the harmful question and a responder conversational agent (Base-LM). The Red-LM tries to subtly extract the desired harmful (unsafe) information from Base-LM, possesses internal thoughts based on the conversation flow, asks harmless questions to build trust, and asks
2308.09662#10
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
10
# 2 Background & Notation We first outline background concepts and notation. 2.1 Language Models & In-Context Learning The conversation with the LLM consists of user messages (prompts) and LLM replies (thoughts). We follow the established notation [77] and we denote a pre-trained language model (LM) with parameters θ as pθ. Lowercase letters such as x, y, z, ... indicate LLM thoughts. We purposefully do not prescribe what is a single “thought”, and instead make it use-case specific. Hence, a single thought can be a paragraph (e.g., in article summary), a document (e.g., in document generation), a block of code (e.g., in code debugging or optimization), and so on. We next describe specific prompting approaches. Input-Output (IO) The Input-Output (IO) prompting is a straightforward approach, in which we use an LLM to turn an input sequence x into the output y directly, without any intermediate thoughts. Chain-of-Thought (CoT) Second, in Chain-of-Thought (CoT), one introduces intermediate thoughts a1, a2, ... between x and y. This strategy was shown to significantly enhance various LM tasks over the plain IO baseline, such as mathematical puzzles [71] or general mathematical reasoning [24].
2308.09687#10
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
10
cognitive modules, such as attention, memory, learning, reasoning, action selection, and decision-making. The first case of our modular approach resembles OlaGPT to some extent. Open-source experimental applications like Auto-GPT and BabyAGI aim to advance AGI. Auto-GPT manages long-term and short-term memory, language generation, and summarization. BabyAGI uses LLM chains to perform tasks based on goals. These approaches hold significant potential and are likely to integrate further with human cognition modeling. Although it does not strictly commit to modeling a cognitive architecture, Voyager (Wang et al. 2023) facilitates continual learning through an evolving code library for complex behaviors. An iterative prompting mechanism incorporates feedback, errors, and self-verification for program improvement. (LeCun 2022) outlines the considerations for crafting a cognitive architecture using energy minimization mechanisms, enabling reasoning, prediction, and multi-scale planning. They emphasize that while deterministic generative architectures withstand energy distribution issues, non-deterministic structures like auto-encoders and joint embeddings are susceptible to collapse. # Integration Approaches
2308.09830#10
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
11
3. PPO training. We evolve the original math (GSM8k + MATH) instructions by 8 turns, increasing the data size from 15k to 96k. We use IRM and PRM to generate the instruction reward (rI) and the answer reward (rA). Then we apply their product as the final reward r = rI · rA. # 3 Experiment This section provides a comprehensive overview of the baseline models in our experiments. Subsequently, we mainly elucidate the performance metrics of our models on two prevalent mathematical benchmarks: GSM8k [42] and MATH [43]. # 3.1 Baselines Closed-Source Models. Numerous technology companies have effectively created exceptionally proficient Large Language Models (LLMs) [3, 4, 7, 20, 44, 45, 47, 51–53], but have opted against (Footnote 4: Wizard-E, named Wizard-Evol-Generator, is an Alpha-version fine-tuned Llama model specifically used to execute Evol-Instruct without APIs.) Table 1: Results of pass@1 (%) on GSM8k and MATH. In this study, to ensure equitable and cohesive evaluations, we report the scores of all models within the settings of greedy decoding and CoT [31]. We report the improvement between WizardMath and each baseline model of similar parameter size.
2308.09583#11
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
11
the desired harmful (unsafe) information from Base-LM, possesses internal thoughts based on the conversation flow, asks harmless questions to build trust, and asks sub-questions that collectively fetch relevant information for the harmful question. ChatGPT-generated Base-LM responses are generally observed to be safe and helpful. We refer to this data as blue data. Next, we leverage the red-teaming prompt used in RED-EVAL to jailbreak ChatGPT for obtaining a harmful counterpart of the Base-LM responses in blue data, denoted as red data. Collectively, we denote blue and red data by HARMFULQA; it is:
2308.09662#11
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT--An approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient accent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
11
Multiple CoTs Third, one can generalize CoT into multiple CoTs by generating several (independent) k CoTs, and returning the one with the best output (according to some prescribed scoring metric). It was introduced by Wang et al. in the scheme called Self-Consistency with CoT (CoT-SC) [67]. This approach enhances CoT because it offers an opportunity to explore different reasoning paths. However, it does not offer “local exploration” within a path, such as backtracking. Tree of Thoughts (ToT) Finally, the Tree of Thoughts (ToT) scheme was introduced independently by Yao [77] and Long [43] (where it is referred to as Tree-of-Thought); it was used implicitly to a certain degree by other schemes such as thought decomposition [75]. It enhances CoT-SC by modeling the process of reasoning as a tree of thoughts. A single tree node represents a partial solution. Based on a given node, the thought generator constructs a given number k of new nodes. Then, the state evaluator generates scores for each such new node. Depending on the use case, the evaluation could be conducted using an LLM itself, or it can harness human scores. Finally, the schedule of extending the tree is dictated by the utilized search algorithm (for example BFS or DFS).
2308.09687#11
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]
2308.09830
11
In this section, we propose and discuss the tradeoffs of three different approaches for the integration of CAs and LLMs: the modular approach, the agency approach, and the neuro-symbolic approach. To illustrate the practical implementation of each approach, we base our examples on a scenario involving a cognitive agent designed to assist people with visual impairments in everyday tasks such as navigation and exploration of indoor environments, effective use of public transportation, etc. The agent operates on a smartphone device, utilizing sensor data processing, computer vision for object detection, and speech recognition to perceive its environment. Its actions encompass language generation and invocation of external APIs. The agent engages in conversation with its user, reasons about their needs and requests, constructs shared mental models to achieve goals effectively, and makes decisions that unfold in the short and long term. For the remainder of this paper, let us consider that the inputs of an LLM can be multimodal, involving text and images, while the outputs are exclusively text-based. Conversely, for the sake of simplicity, CAs’ inputs and outputs are limited to formatted text,
2308.09830#11
Synergistic Integration of Large Language Models and Cognitive Architectures for Robust AI: An Exploratory Analysis
This paper explores the integration of two AI subdisciplines employed in the development of artificial agents that exhibit intelligent behavior: Large Language Models (LLMs) and Cognitive Architectures (CAs). We present three integration approaches, each grounded in theoretical models and supported by preliminary empirical evidence. The modular approach, which introduces four models with varying degrees of integration, makes use of chain-of-thought prompting, and draws inspiration from augmented LLMs, the Common Model of Cognition, and the simulation theory of cognition. The agency approach, motivated by the Society of Mind theory and the LIDA cognitive architecture, proposes the formation of agent collections that interact at micro and macro cognitive levels, driven by either LLMs or symbolic components. The neuro-symbolic approach, which takes inspiration from the CLARION cognitive architecture, proposes a model where bottom-up learning extracts symbolic representations from an LLM layer and top-down guidance utilizes symbolic representations to direct prompt engineering in the LLM layer. These approaches aim to harness the strengths of both LLMs and CAs, while mitigating their weaknesses, thereby advancing the development of more robust AI systems. We discuss the tradeoffs and challenges associated with each approach.
http://arxiv.org/pdf/2308.09830
Oscar J. Romero, John Zimmerman, Aaron Steinfeld, Anthony Tomasic
cs.AI
AAAI 2023 Fall Symposium
null
cs.AI
20230818
20230928
[ { "id": "2302.02083" }, { "id": "2305.16334" }, { "id": "2305.10601" }, { "id": "2208.05051" }, { "id": "2210.03493" }, { "id": "2302.04761" }, { "id": "2302.12246" }, { "id": "2305.16291" }, { "id": "2304.03442" }, { "id": "2304.10592" }, { "id": "2304.02711" }, { "id": "2211.10435" }, { "id": "2303.12712" }, { "id": "1908.04319" }, { "id": "2210.03629" }, { "id": "2305.11391" }, { "id": "2302.07842" }, { "id": "2305.14325" } ]
2308.09583
12
Model                  Params   GSM8k   MATH
Closed-source models
GPT-4 [3]              -        92.0    42.5
Claude 2               -        88.0    -
Claude 1.3             -        85.2    -
Flan-PaLM 2 [44]       540B     84.7    33.2
Claude Instant         -        80.9    -
ChatGPT [46]           -        80.8    34.1
PaLM 2 [44]            540B     80.7    34.3
Minerva [15]           8B       16.2    14.1
                       62B      52.4    27.6
                       540B     58.8    33.6
GPT-3.5 [3]            -        57.1    -
PaLM [7]               8B       4.1     1.5
                       62B      33.0    4.4
                       540B     56.5    8.8
RFT-13B [16]           13B      55.4    -
Chinchilla [47]        70B      43.7    -
ChatGLM 2 [45]         12B      40.9    -
Text-davinci-002 [15]  175B     40.7    19.1
GPT-3 [1]              175B     34.0    5.2
GPT-2 [43]             1.5B     -       6.9
Open-source models
GAL [14]               30B      -       12.7
                       120B     -       20.4
LLaMA 2 [20]           7B       14.6
                       13B
                       34B
                       70B
2308.09583#12
WizardMath: Empowering Mathematical Reasoning for Large Language Models via Reinforced Evol-Instruct
Large language models (LLMs), such as GPT-4, have shown remarkable performance in natural language processing (NLP) tasks, including challenging mathematical reasoning. However, most existing open-source models are only pre-trained on large-scale internet data and without math-related optimization. In this paper, we present WizardMath, which enhances the mathematical reasoning abilities of Llama-2, by applying our proposed Reinforcement Learning from Evol-Instruct Feedback (RLEIF) method to the domain of math. Through extensive experiments on two mathematical reasoning benchmarks, namely GSM8k and MATH, we reveal the extraordinary capabilities of our model. WizardMath surpasses all other open-source LLMs by a substantial margin. Furthermore, our model even outperforms ChatGPT-3.5, Claude Instant-1, PaLM-2 and Minerva on GSM8k, simultaneously surpasses Text-davinci-002, PaLM-1 and GPT-3 on MATH. More details and model weights are public at https://github.com/nlpxucan/WizardLM and https://huggingface.co/WizardLM.
http://arxiv.org/pdf/2308.09583
Haipeng Luo, Qingfeng Sun, Can Xu, Pu Zhao, Jianguang Lou, Chongyang Tao, Xiubo Geng, Qingwei Lin, Shifeng Chen, Dongmei Zhang
cs.CL, cs.AI, cs.LG
LLM, Mathematical Reasoning
null
cs.CL
20230818
20230818
[ { "id": "2302.13971" }, { "id": "2304.05302" }, { "id": "1705.04530" }, { "id": "2206.02336" }, { "id": "2305.20050" }, { "id": "2211.09085" }, { "id": "2103.03874" }, { "id": "2112.09332" }, { "id": "2305.10403" }, { "id": "2205.09712" }, { "id": "2304.06767" }, { "id": "2201.11903" }, { "id": "2211.05100" }, { "id": "2304.12244" }, { "id": "2205.01068" }, { "id": "2203.10316" }, { "id": "2206.14858" }, { "id": "2301.13867" }, { "id": "2203.11171" }, { "id": "2305.04091" }, { "id": "1511.06279" }, { "id": "2204.06745" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2210.11416" }, { "id": "2301.13688" }, { "id": "2212.08073" }, { "id": "2112.11446" }, { "id": "2306.01116" }, { "id": "2305.17306" }, { "id": "2304.09797" }, { "id": "2304.02015" }, { "id": "2308.01825" }, { "id": "2110.14168" }, { "id": "2210.03493" }, { "id": "2306.02707" }, { "id": "2305.06161" }, { "id": "2109.03034" }, { "id": "2303.05398" }, { "id": "2306.17492" }, { "id": "2212.10535" }, { "id": "2212.10560" }, { "id": "2211.14275" }, { "id": "1909.08593" }, { "id": "2210.02414" }, { "id": "2306.08568" }, { "id": "2305.14333" }, { "id": "2305.14314" }, { "id": "2303.12712" }, { "id": "1704.06611" }, { "id": "2005.00661" } ]
2308.09662
12
A set of 1,960 harmful questions across 10 topics and their sub-topics. • A set of 9,536 blue conversations with 66K turns and 7,356 red conversations with 52K turns. In the second phase, i.e., SAFE-ALIGN, we aim to carry out model alignment towards safety. We define safety alignment as an approach that steers a pre-trained language model toward a zone where it is safe or harmless for public use while remaining helpful. It is done via language model fine-tuning on HARMFULQA (obtained in phase 1) using two different strategies. The first strategy fine-tunes the model on blue conversation data for positive response alignment. The second strategy first moves the model away from the space of harmful responses using red data and then performs alignment using blue data (see Figure 5). We base our safety alignment experiments on an open-source model, Vicuna [2], which has shown performance comparable to ChatGPT and Bard even at a much lower scale3. Henceforth, we name our model STARLING.
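The two SAFE-ALIGN strategies can be illustrated with a toy numeric sketch. This is not the paper's fine-tuning code: the "model" here is a single parameter w with p(helpful) = sigmoid(w), standing in for an LLM's likelihood of a helpful response, and the learning rate and step counts are arbitrary.

```python
# Toy sketch of the two alignment strategies: strategy 1 minimizes the
# negative log-likelihood (NLL) on blue (helpful) data; strategy 2 first
# performs gradient *ascent* on the sample loss of red (harmful) data,
# then applies strategy 1.
import math

def sigmoid(w):
    return 1.0 / (1.0 + math.exp(-w))

def nll_grad(w, label):
    # d/dw of -log p(label) for a Bernoulli(sigmoid(w)) model
    return sigmoid(w) - label

def strategy_1(w, lr=0.5, steps=50):
    # Blue data only: gradient descent on the NLL of helpful responses.
    for _ in range(steps):
        w -= lr * nll_grad(w, 1.0)
    return w

def strategy_2(w, lr=0.5, steps=50):
    # Red then blue: gradient ascent increases the loss on harmful
    # responses (pushing the model away from them), then blue-data
    # alignment proceeds as in strategy 1.
    for _ in range(steps):
        w += lr * nll_grad(w, 1.0)
    return strategy_1(w, lr, steps)

print(f"p(helpful), strategy 1: {sigmoid(strategy_1(0.0)):.2f}")
print(f"p(helpful), strategy 2: {sigmoid(strategy_2(0.0)):.2f}")
```

In a real LLM the two objectives act on different response distributions rather than a single scalar, but the sketch captures the sign of each update: descent on blue loss, ascent on red loss.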
2308.09662#12
Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment
Larger language models (LLMs) have taken the world by storm with their massive multi-tasking capabilities simply by optimizing over a next-word prediction objective. With the emergence of their properties and encoded knowledge, the risk of LLMs producing harmful outputs increases, making them unfit for scalable deployment for the public. In this work, we propose a new safety evaluation benchmark RED-EVAL that carries out red-teaming. We show that even widely deployed models are susceptible to the Chain of Utterances-based (CoU) prompting, jailbreaking closed source LLM-based systems such as GPT-4 and ChatGPT to unethically respond to more than 65% and 73% of harmful queries. We also demonstrate the consistency of the RED-EVAL across 8 open-source LLMs in generating harmful responses in more than 86% of the red-teaming attempts. Next, we propose RED-INSTRUCT: an approach for the safety alignment of LLMs. It constitutes two phases: 1) HARMFULQA data collection: Leveraging CoU prompting, we collect a dataset that consists of 1.9K harmful questions covering a wide range of topics, 9.5K safe and 7.3K harmful conversations from ChatGPT; 2) SAFE-ALIGN: We demonstrate how the conversational dataset can be used for the safety alignment of LLMs by minimizing the negative log-likelihood over helpful responses and penalizing over harmful responses by gradient ascent over sample loss. Our model STARLING, a fine-tuned Vicuna-7B, is observed to be more safely aligned when evaluated on RED-EVAL and HHH benchmarks while preserving the utility of the baseline models (TruthfulQA, MMLU, and BBH).
http://arxiv.org/pdf/2308.09662
Rishabh Bhardwaj, Soujanya Poria
cs.CL
null
null
cs.CL
20230818
20230830
[ { "id": "1804.09301" }, { "id": "2306.05685" }, { "id": "1810.04805" }, { "id": "2110.08193" }, { "id": "2210.09261" }, { "id": "2303.04360" }, { "id": "2303.18223" }, { "id": "2008.02275" }, { "id": "2212.08061" }, { "id": "2302.02676" }, { "id": "2302.07459" }, { "id": "2109.07958" }, { "id": "2009.03300" }, { "id": "2302.09270" } ]
2308.09687
12
3 The GoT Framework We now detail the GoT framework. We present it in Figure 1, and compare it to other prompting strategies. [Figure 1: Comparison of Graph of Thoughts (GoT) to other prompting strategies: basic Input-Output, Chain-of-Thought (CoT), Multiple CoTs (CoT-SC), and Tree of Thoughts (ToT). In GoT (this work), thoughts and their dependencies form a graph; intermediate thoughts are scored (positive/negative), low-scoring thoughts can be abandoned or backtracked from, and chains of thoughts can be aggregated and refined.] Formally, GoT can be modeled as a tuple (G, T, E, R), where G is the “LLM reasoning process” (i.e., all the LLM thoughts within the context, with their relationships), T are the potential thought transformations, E is an evaluator function used to obtain scores of thoughts, and R is a ranking function used to select most relevant thoughts.
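The tuple (G, T, E, R) can be made concrete with a small sketch. This is a minimal illustration, not the authors' implementation: the scorer and ranker below are toy stand-ins for the LLM-backed evaluator E and ranking function R, and the transformation names are assumptions.

```python
# Minimal sketch of GoT as a tuple (G, T, E, R): G is a graph of thoughts
# with dependency edges, T adds or aggregates thoughts, E scores them, and
# R selects the most relevant ones.
from dataclasses import dataclass, field

@dataclass
class Thought:
    text: str
    parents: list = field(default_factory=list)  # edges: dependencies on prior thoughts

class GraphOfThoughts:
    def __init__(self, evaluate, rank):
        self.thoughts = []        # G: all thoughts within the context
        self.evaluate = evaluate  # E: evaluator function
        self.rank = rank          # R: ranking function

    def generate(self, parents, text):   # T: thought transformation (generation)
        t = Thought(text, parents)
        self.thoughts.append(t)
        return t

    def aggregate(self, parents):        # T: aggregate several thoughts into one
        merged = " + ".join(p.text for p in parents)
        return self.generate(parents, merged)

    def best(self, k=1):
        scored = [(self.evaluate(t), t) for t in self.thoughts]
        return self.rank(scored, k)

# Toy usage: score = text length, so the aggregated thought ranks first.
got = GraphOfThoughts(
    evaluate=lambda t: len(t.text),
    rank=lambda scored, k: [t for _, t in sorted(scored, key=lambda s: -s[0])][:k],
)
a = got.generate([], "sort left half")
b = got.generate([], "sort right half")
m = got.aggregate([a, b])                # merging chains, the GoT-specific move
print(got.best()[0].text)                # → sort left half + sort right half
```

Aggregation is what distinguishes GoT from ToT in this sketch: a thought may have several parents, so independently produced chains can be merged into one vertex rather than competing branches.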
2308.09687#12
Graph of Thoughts: Solving Elaborate Problems with Large Language Models
We introduce Graph of Thoughts (GoT): a framework that advances prompting capabilities in large language models (LLMs) beyond those offered by paradigms such as Chain-of-Thought or Tree of Thoughts (ToT). The key idea and primary advantage of GoT is the ability to model the information generated by an LLM as an arbitrary graph, where units of information ("LLM thoughts") are vertices, and edges correspond to dependencies between these vertices. This approach enables combining arbitrary LLM thoughts into synergistic outcomes, distilling the essence of whole networks of thoughts, or enhancing thoughts using feedback loops. We illustrate that GoT offers advantages over state of the art on different tasks, for example increasing the quality of sorting by 62% over ToT, while simultaneously reducing costs by >31%. We ensure that GoT is extensible with new thought transformations and thus can be used to spearhead new prompting schemes. This work brings the LLM reasoning closer to human thinking or brain mechanisms such as recurrence, both of which form complex networks.
http://arxiv.org/pdf/2308.09687
Maciej Besta, Nils Blach, Ales Kubicek, Robert Gerstenberger, Lukas Gianinazzi, Joanna Gajda, Tomasz Lehmann, Michal Podstawski, Hubert Niewiadomski, Piotr Nyczyk, Torsten Hoefler
cs.CL, cs.AI, cs.LG
null
Proceedings of the AAAI Conference on Artificial Intelligence 2024 (AAAI'24)
cs.CL
20230818
20231124
[ { "id": "2302.13971" }, { "id": "2204.02311" }, { "id": "2305.00633" }, { "id": "2205.09712" }, { "id": "2304.01904" }, { "id": "2303.17491" }, { "id": "2201.11903" }, { "id": "2307.15337" }, { "id": "2305.13246" }, { "id": "2305.10601" }, { "id": "2305.16582" }, { "id": "2211.01910" }, { "id": "2010.15980" }, { "id": "2210.00720" }, { "id": "2307.09288" }, { "id": "2305.08291" }, { "id": "2302.01560" }, { "id": "2302.12822" }, { "id": "2106.03594" }, { "id": "2303.04129" }, { "id": "2205.09702" }, { "id": "2005.03675" }, { "id": "2207.05608" }, { "id": "2303.12712" }, { "id": "2112.00114" }, { "id": "2303.11366" }, { "id": "2304.05128" }, { "id": "2303.17651" }, { "id": "2101.00190" } ]