doi (string, len 10) | chunk-id (int64, 0–936) | chunk (string, len 401–2.02k) | id (string, len 12–14) | title (string, len 8–162) | summary (string, len 228–1.92k) | source (string, len 31) | authors (string, len 7–6.97k) | categories (string, len 5–107) | comment (string, len 4–398, nullable ⌀) | journal_ref (string, len 8–194, nullable ⌀) | primary_category (string, len 5–17) | published (string, len 8) | updated (string, len 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
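Because multi-line cells make the flattened table hard to scan, it helps to view each row as one record. A minimal sketch in Python of the row structure implied by the header above; the `PaperChunk` class name and the toy values (mirroring the first record below, with text fields abbreviated) are illustrative assumptions, not part of the dataset:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class PaperChunk:
    """One row of the dump: a text chunk plus the metadata of its source paper."""
    doi: str                    # arXiv identifier, e.g. "2307.02477"
    chunk_id: int               # index of the chunk within the paper (0-936)
    chunk: str                  # the chunk text itself (401-2.02k characters)
    id: str                     # "<doi>#<chunk_id>", e.g. "2307.02477#83"
    title: str
    summary: str                # the paper's abstract, repeated on every row
    source: str                 # PDF URL, e.g. "http://arxiv.org/pdf/2307.02477"
    authors: str
    categories: str             # comma-separated, e.g. "cs.CL, cs.AI"
    comment: Optional[str]      # nullable (marked with ⌀ in the schema)
    journal_ref: Optional[str]  # nullable
    primary_category: str
    published: str              # YYYYMMDD
    updated: str                # YYYYMMDD
    references: list = field(default_factory=list)  # e.g. [{"id": "2201.11903"}, ...]

row = PaperChunk(
    doi="2307.02477", chunk_id=83, chunk="Gabriel Ilharco, ...",
    id="2307.02477#83", title="Reasoning or Reciting? ...",
    summary="The impressive performance ...",
    source="http://arxiv.org/pdf/2307.02477", authors="Zhaofeng Wu, ...",
    categories="cs.CL, cs.AI", comment=None, journal_ref=None,
    primary_category="cs.CL", published="20230705", updated="20230801",
)
assert row.id == f"{row.doi}#{row.chunk_id}"
```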
2307.02477 | 83 | Gabriel Ilharco, Rowan Zellers, Ali Farhadi, and Hannaneh Hajishirzi. 2021. Probing contextual language models for common ground with visual representations. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 5367–5377, Online. Association for Computational Linguistics.
Charles Jin and Martin Rinard. 2023. Evidence of meaning in language models trained on programs. ArXiv preprint, abs/2305.11169.
Divyansh Kaushik, Eduard Hovy, and Zachary Chase Lipton. 2020. Learning the difference that makes a difference with counterfactually-augmented data. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
Divyansh Kaushik, Amrith Setlur, Eduard H. Hovy, and Zachary Chase Lipton. 2021. Explaining the efficacy of counterfactually augmented data. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net. | 2307.02477#83 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
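The summary above describes the paper's framework: each task has a default variant and a counterfactual variant that changes an underlying assumption while keeping the abstract procedure fixed. A minimal sketch of that comparison for one of the paper's task families, two-digit addition in a nonstandard base; the prompt wording and the `query_model` stub are hypothetical stand-ins for a real LM call:

```python
import random

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (2-10)."""
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits)) or "0"

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an actual language model API call."""
    raise NotImplementedError

def addition_accuracy(base: int, trials: int = 100) -> float:
    """Accuracy on two-digit addition; base 10 is the default task,
    any other base is a counterfactual variant of the same procedure."""
    correct = 0
    for _ in range(trials):
        a, b = random.randint(10, 99), random.randint(10, 99)
        prompt = (
            f"You are doing addition in base {base}. "
            f"What is {to_base(a, base)} + {to_base(b, base)}? "
            f"Answer with only the result, written in base {base}."
        )
        correct += query_model(prompt).strip() == to_base(a + b, base)
    return correct / trials

# The paper's finding, in this framing: the default condition
# (addition_accuracy(10)) scores substantially and consistently higher than
# counterfactual conditions such as addition_accuracy(9), even though
# digit-wise addition with carries is the same abstract procedure in both.
```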
2307.02046 | 84 | [17]
[18] Z. Chen, H. Mao, H. Li, W. Jin, H. Wen, X. Wei, S. Wang, D. Yin, W. Fan, H. Liu et al., "Exploring the potential of large language models (LLMs) in learning on graphs," arXiv preprint arXiv:2307.03393, 2023.
[19] W. X. Zhao, K. Zhou, J. Li, T. Tang, X. Wang, Y. Hou, Y. Min, B. Zhang, J. Zhang, Z. Dong et al., "A survey of large language models," arXiv preprint arXiv:2303.18223, 2023.
[20] J. Zhang, R. Xie, Y. Hou, W. X. Zhao, L. Lin, and J.-R. Wen, "Recommendation as instruction following: A large language model empowered recommendation approach," arXiv preprint arXiv:2305.07001, 2023.
[21] P. Liu, L. Zhang, and J. A. Gulla, "Pre-train, prompt and recommendation: A comprehensive survey of language modelling paradigm adaptations in recommender systems," arXiv preprint arXiv:2302.03735, 2023. | 2307.02046#84 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
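The summary above first covers harnessing an LLM as a feature encoder for users and items. A minimal sketch of that idea, assuming a BERT-style encoder with mean pooling and cosine-similarity ranking; the model choice and toy texts are illustrative assumptions, not a specific method from the survey:

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts: list[str]) -> torch.Tensor:
    """Encode each text into one vector by mean pooling the token states."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state   # (n, seq_len, dim)
    mask = batch["attention_mask"].unsqueeze(-1)      # (n, seq_len, 1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)

# Toy user profile and candidate item descriptions.
user = embed(["Enjoys papers on graph neural networks and recommender systems."])
items = embed([
    "A survey of large language models.",
    "Graph trend filtering networks for recommendation.",
    "Deep learning-based drug recommendation on social media.",
])

# Rank candidate items by cosine similarity to the user representation.
scores = torch.nn.functional.cosine_similarity(user, items)
print(scores.argsort(descending=True).tolist())
```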
2307.02477 | 84 | Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2023. Large language models are zero-shot reasoners.
Kazushi Kondo, Saku Sugawara, and Akiko Aizawa. 2023. Probing physical reasoning with counter-commonsense context. ArXiv preprint, abs/2306.02258.
Tiffany H. Kung, Morgan Cheatham, Arielle Medenilla, Czarina Sillos, Lorie De Leon, Camille Elepaño, Maria Madriaga, Rimel Aggabao, Giezel Diaz-Candido, James Maningo, and Victor Tseng. 2023. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2):1–12.
Matt J. Kusner, Joshua R. Loftus, Chris Russell, and Ricardo Silva. 2017. Counterfactual fairness. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 4066–4076. | 2307.02477#84 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 85 | [22] L. Wu, Z. Zheng, Z. Qiu, H. Wang, H. Gu, T. Shen, C. Qin, C. Zhu, H. Zhu, Q. Liu et al., "A survey on large language models for recommendation," arXiv preprint arXiv:2305.19860, 2023.
[23] J. Lin, X. Dai, Y. Xi, W. Liu, B. Chen, X. Li, C. Zhu, H. Guo, Y. Yu, R. Tang et al., "How can recommender systems benefit from large language models: A survey," arXiv preprint arXiv:2306.05817, 2023.
[24] J. Wu, W. Fan, J. Chen, S. Liu, Q. Li, and K. Tang, "Disentangled contrastive learning for social recommendation," in Proceedings of
the 31st ACM International Conference on Information & Knowledge Management, 2022, pp. 4570–4574. | 2307.02046#85 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 85 | Emre Kıcıman, Robert Ness, Amit Sharma, and Chenhao Tan. 2023. Causal reasoning and large language models: Opening a new frontier for causality.
David A. Lagnado, Tobias Gerstenberg, and Ro'i Zultan. 2013. Causal responsibility and counterfactuals. Cognitive Science, 37:1036–1073.
Brenden M. Lake, Tomer D. Ullman, Joshua B. Tenenbaum, and Samuel J. Gershman. 2017. Building machines that learn and think like people. Behavioral and Brain Sciences, 40.
Andrew Kyle Lampinen. 2023. Can language models handle recursively nested grammatical structures? A case study on comparing models and humans.
Karim Lasri, Tiago Pimentel, Alessandro Lenci, Thierry Poibeau, and Ryan Cotterell. 2022. Probing for the usage of grammatical number. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 8818–8831, Dublin, Ireland. Association for Computational Linguistics. | 2307.02477#85 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 86 | the 31st ACM International Conference on Information & Knowledge Management, 2022, pp. 4570–4574.
[25] W. Fan, X. Liu, W. Jin, X. Zhao, J. Tang, and Q. Li, "Graph trend filtering networks for recommendation," in Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022, pp. 112–121.
[26] W. Fan, Q. Li, and M. Cheng, "Deep modeling of social relations for recommendation," in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, no. 1, 2018.
[27] X. Zhao, H. Liu, W. Fan, H. Liu, J. Tang, and C. Wang, "AutoLoss: Automated loss function search in recommendations," in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, 2021, pp. 3959–3967.
[28] X. Zhao, H. Liu, W. Fan, H. Liu, J. Tang, C. Wang, M. Chen, X. Zheng, X. Liu, and X. Yang, "AutoEmb: Automated embedding dimensionality search in streaming recommendations," in 2021 IEEE International Conference on Data Mining (ICDM). IEEE, 2021, pp. 896–905. | 2307.02046#86 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 86 | Aitor Lewkowycz, Anders Andreassen, David Dohan, Ethan Dyer, Henryk Michalewski, Vinay Ramasesh, Ambrose Slone, Cem Anil, Imanol Schlag, Theo Gutman-Solo, Yuhuai Wu, Behnam Neyshabur, Guy Gur-Ari, and Vedant Misra. 2022. Solving quantitative reasoning problems with language models.
Belinda Li, Jane Yu, Madian Khabsa, Luke Zettlemoyer, Alon Halevy, and Jacob Andreas. 2022. Quantifying adaptability in pre-trained language models with 500 tasks. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4696–4715, Seattle, United States. Association for Computational Linguistics.
Belinda Z. Li, Maxwell Nye, and Jacob Andreas. 2021. Implicit representations of meaning in neural language models. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1813–1827, Online. Association for Computational Linguistics. | 2307.02477#86 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 87 | [29] F. Vasile, E. Smirnova, and A. Conneau, "Meta-prod2vec: Product embeddings using side-information for recommendation," in Proceedings of the 10th ACM Conference on Recommender Systems, 2016, pp. 225–232.
[30] X. He, L. Liao, H. Zhang, L. Nie, X. Hu, and T.-S. Chua, "Neural collaborative filtering," in Proceedings of the 26th International Conference on World Wide Web, 2017, pp. 173–182.
[31] R. Ying, R. He, K. Chen, P. Eksombatchai, W. L. Hamilton, and J. Leskovec, "Graph convolutional neural networks for web-scale recommender systems," in Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 2018, pp. 974–983.
[32] Y. Ma and J. Tang, Deep learning on graphs. Cambridge University Press, 2021. | 2307.02046#87 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 87 | Jiaang Li, Yova Kementchedjhieva, and Anders Søgaard. 2023a. Implications of the convergence of language and vision model geometries. ArXiv preprint, abs/2302.06555.
Jiaxuan Li, Lang Yu, and Allyson Ettinger. 2023b. Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios.
Kenneth Li, Aspen K. Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2023c. Emergent world representations: Exploring a sequence model trained on a synthetic task. In The Eleventh International Conference on Learning Representations.
Weixian Waylon Li, Yftah Ziser, Maximin Coavoux, and Shay B. Cohen. 2023d. BERT is not the count: Learning to match mathematical statements with proofs. In Proceedings of the 17th Conference of the European Chapter of the Association for Computational Linguistics, pages 3581–3593, Dubrovnik, Croatia. Association for Computational Linguistics.
Tal Linzen and Marco Baroni. 2021. Syntactic structure from deep learning. Annual Review of Linguistics, 7:195–212. | 2307.02477#87 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 88 | [32] Y. Ma and J. Tang, Deep learning on graphs. Cambridge University Press, 2021.
[33] T. Derr, Y. Ma, W. Fan, X. Liu, C. Aggarwal, and J. Tang, "Epidemic graph convolutional network," in Proceedings of the 13th International Conference on Web Search and Data Mining (WSDM), 2020, pp. 160–168.
[34] C. Chen, M. Zhang, Y. Liu, and S. Ma, "Neural attentional rating regression with review-level explanations," in Proceedings of the 2018 World Wide Web Conference, 2018, pp. 1583–1592. | 2307.02046#88 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 88 | Tal Linzen and Marco Baroni. 2021. Syntactic structure from deep learning. Annual Review of Linguistics, 7:195–212.
Inbal Magar and Roy Schwartz. 2022. Data contamination: From memorization to exploitation. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 157–165, Dublin, Ireland. Association for Computational Linguistics.
Kyle Mahowald, Anna A. Ivanova, Idan A. Blank, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2023. Dissociating language and thought in large language models: a cognitive perspective. ArXiv preprint, abs/2301.06627.
Kamil Malinka, Martin Perešíni, Anton Firc, Ondřej Hujňák, and Filip Januš. 2023. On the educational impact of ChatGPT: Is artificial intelligence ready to obtain a university degree?
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330. | 2307.02477#88 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 89 | [35] F. Wu, Y. Qiao, J.-H. Chen, C. Wu, T. Qi, J. Lian, D. Liu, X. Xie, J. Gao, W. Wu et al., "MIND: A large-scale dataset for news recommendation," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 3597–3606.
[36] C. Wu, F. Wu, Y. Huang, and X. Xie, "Personalized news recommendation: Methods and challenges," ACM Transactions on Information Systems, vol. 41, no. 1, pp. 1–50, 2023.
[37] S. Dongre and J. Agrawal, "Deep learning-based drug recommendation and ADR detection healthcare model on social media," IEEE Transactions on Computational Social Systems, 2023. | 2307.02046#89 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 89 | John McCarthy. 1959. Programs with common sense. In Proceedings of the Teddington Conference on the Mechanization of Thought Processes, pages 75–91.
Ian R. McKenzie, Alexander Lyzhov, Michael Pieler, Alicia Parrish, Aaron Mueller, Ameya Prabhu, Euan McLean, Aaron Kirtland, Alexis Ross, Alisa Liu, Andrew Gritsevskiy, Daniel Wurgaft, Derik Kauffman, Gabriel Recchia, Jiacheng Liu, Joe Cavanagh, Max Weiss, Sicong Huang, The Floating Droid, Tom Tseng, Tomasz Korbak, Xudong Shen, Yuhui Zhang, Zhengping Zhou, Najoung Kim, Samuel R. Bowman, and Ethan Perez. 2023. Inverse scaling: When bigger isn't better.
Antonio Valerio Miceli-Barone, Fazl Barez, Ioannis Konstas, and Shay B. Cohen. 2023. The larger they are, the harder they fail: Language models do not recognize identifier swaps in Python. | 2307.02477#89 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 90 | [38] F. Sun, J. Liu, J. Wu, C. Pei, X. Lin, W. Ou, and P. Jiang, "BERT4Rec: Sequential recommendation with bidirectional encoder representations from transformer," in Proceedings of the 28th ACM International Conference on Information and Knowledge Management, 2019, pp. 1441–1450.
[39] J. Liu, C. Liu, R. Lv, K. Zhou, and Y. Zhang, "Is ChatGPT a good recommender? A preliminary study," arXiv preprint arXiv:2304.10149, 2023.
[40] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," arXiv preprint arXiv:1810.04805, 2018.
[41] A. Radford, K. Narasimhan, T. Salimans, I. Sutskever et al., "Improving language understanding by generative pre-training," 2018. | 2307.02046#90 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 90 | Swaroop Mishra, Daniel Khashabi, Chitta Baral, and Hannaneh Hajishirzi. 2022. Cross-task generalization via natural language crowdsourcing instructions. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 3470–3487, Dublin, Ireland. Association for Computational Linguistics.
Dimitri Coelho Mollo and Raphaël Millière. 2023. The vector grounding problem. ArXiv preprint, abs/2304.01481.
Razieh Nabi and Ilya Shpitser. 2018. Fair inference on outcomes. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 1931–1940. AAAI Press. | 2307.02477#90 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 91 | [42] C. Raffel, N. Shazeer, A. Roberts, K. Lee, S. Narang, M. Matena, Y. Zhou, W. Li, and P. J. Liu, "Exploring the limits of transfer learning with a unified text-to-text transformer," The Journal of Machine Learning Research, vol. 21, no. 1, pp. 5485–5551, 2020.
[43] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin, "Attention is all you need," Advances in Neural Information Processing Systems, vol. 30, 2017.
[44] Z. Zhang, G. Zhang, B. Hou, W. Fan, Q. Li, S. Liu, Y. Zhang, and S. Chang, "Certified robustness for large language models with self-denoising," arXiv preprint arXiv:2307.07171, 2023. | 2307.02046#91 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 91 | Joakim Nivre, Marie-Catherine de Marneffe, Filip Ginter, Yoav Goldberg, Jan Hajič, Christopher D. Manning, Ryan McDonald, Slav Petrov, Sampo Pyysalo, Natalia Silveira, Reut Tsarfaty, and Daniel Zeman. 2016. Universal Dependencies v1: A multilingual treebank collection. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 1659–1666, Portorož, Slovenia. European Language Resources Association (ELRA).
Harsha Nori, Nicholas King, Scott Mayer McKinney, Dean Carignan, and Eric Horvitz. 2023. Capabilities of GPT-4 on medical challenge problems.
Maxwell Nye, Anders Johan Andreassen, Guy Gur-Ari, Henryk Michalewski, Jacob Austin, David Bieber, David Dohan, Aitor Lewkowycz, Maarten Bosma, David Luan, Charles Sutton, and Augustus Odena. 2021. Show your work: Scratchpads for intermediate computation with language models.
OpenAI. 2023. GPT-4 technical report.
2307.02046 | 92 | [45] R. Thoppilan, D. De Freitas, J. Hall, N. Shazeer, A. Kulshreshtha, H.-T. Cheng, A. Jin, T. Bos, L. Baker, Y. Du et al., "LaMDA: Language models for dialog applications," arXiv preprint arXiv:2201.08239, 2022.
[46] A. Chowdhery, S. Narang, J. Devlin, M. Bosma, G. Mishra, A. Roberts, P. Barham, H. W. Chung, C. Sutton, S. Gehrmann et al., "PaLM: Scaling language modeling with pathways," arXiv preprint arXiv:2204.02311, 2022.
[47] W.-L. Chiang, Z. Li, Z. Lin, Y. Sheng, Z. Wu, H. Zhang, L. Zheng, S. Zhuang, Y. Zhuang, J. E. Gonzalez et al., "Vicuna: An open-source chatbot impressing GPT-4 with 90%* ChatGPT quality," see https://vicuna.lmsys.org (accessed 14 April 2023), 2023.
2307.02477 | 92 | Roma Patel and Ellie Pavlick. 2022. Mapping language models to grounded conceptual spaces. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net.
Judea Pearl. 1988. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA.
Judea Pearl. 2009. Causality, 2nd edition. Cambridge University Press.
Steven Piantadosi and Felix Hill. 2022. Meaning without reference in large language models. In NeurIPS 2022 Workshop on Neuro Causal and Symbolic AI (nCSI).
Tiago Pimentel and Ryan Cotterell. 2021. A Bayesian framework for information-theoretic probing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 2869–2887, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
2307.02046 | 93 | [48] H. J. Kim, H. Cho, J. Kim, T. Kim, K. M. Yoo, and S.-g. Lee, "Self-generated in-context learning: Leveraging auto-regressive language models as a demonstration generator," arXiv preprint arXiv:2206.08082, 2022.
[49] O. Rubin, J. Herzig, and J. Berant, "Learning to retrieve prompts for in-context learning," arXiv preprint arXiv:2112.08633, 2021.
[50] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou, "Chain of thought prompting elicits reasoning in large language models," arXiv preprint arXiv:2201.11903, 2022.
[51] X. Wang, J. Wei, D. Schuurmans, Q. Le, E. Chi, S. Narang, A. Chowdhery, and D. Zhou, "Self-consistency improves chain of thought reasoning in language models," arXiv preprint arXiv:2203.11171, 2022.
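Entries [50] and [51] describe techniques that compose directly: sample several chain-of-thought generations at nonzero temperature, then keep the most frequent final answer. A minimal sketch of that voting loop follows; sample_chain_of_thought is a placeholder returning canned strings, standing in for a real sampled LLM call.

import random
import re
from collections import Counter

def sample_chain_of_thought(question: str) -> str:
    # Placeholder for one sampled LLM completion that ends "The answer is <x>."
    canned = [
        "There are 3 cars and each has 4 wheels, so 3 * 4 = 12. The answer is 12.",
        "3 cars times 4 wheels each gives 12 wheels in total. The answer is 12.",
        "Counting 4 + 4 + 3 = 11 wheels. The answer is 11.",  # one faulty chain
    ]
    return random.choice(canned)

def self_consistent_answer(question: str, n_samples: int = 10) -> str:
    answers = []
    for _ in range(n_samples):
        completion = sample_chain_of_thought(question)
        match = re.search(r"The answer is (\S+?)\.", completion)
        if match:
            answers.append(match.group(1))
    # Majority vote over sampled final answers, as in [51]; ties break arbitrarily.
    return Counter(answers).most_common(1)[0][0]

print(self_consistent_answer("How many wheels do 3 cars have?"))  # usually "12"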
2307.02477 | 93 | Lianhui Qin, Antoine Bosselut, Ari Holtzman, Chandra Bhagavatula, Elizabeth Clark, and Yejin Choi. 2019. Counterfactual story reasoning and generation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5043–5053, Hong Kong, China. Association for Computational Linguistics.
Lianhui Qin, Vered Shwartz, Peter West, Chandra Bhagavatula, Jena D. Hwang, Ronan Le Bras, Antoine Bosselut, and Yejin Choi. 2020. Back to the future: Unsupervised backprop-based decoding for counterfactual and abductive commonsense reasoning. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 794–805, Online. Association for Computational Linguistics.
2307.02046 | 94 | [52] E. Zelikman, Y. Wu, J. Mu, and N. Goodman, "STaR: Bootstrapping reasoning with reasoning," Advances in Neural Information Processing Systems, vol. 35, pp. 15476–15488, 2022.
[53] H. Fei, B. Li, Q. Liu, L. Bing, F. Li, and T.-S. Chua, "Reasoning implicit sentiment with chain-of-thought prompting," arXiv preprint arXiv:2305.11255, 2023.
[54] Z. Jin and W. Lu, "Tab-CoT: Zero-shot tabular chain of thought," arXiv preprint arXiv:2305.17812, 2023.
2307.02477 | 94 | Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. 2021. Learning transferable visual models from natural language supervision. In Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 8748–8763. PMLR.
Shauli Ravfogel, Yoav Goldberg, and Tal Linzen. 2019. Studying the inductive biases of RNNs with synthetic variations of natural languages. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3532–3542, Minneapolis, Minnesota. Association for Computational Linguistics.
2307.02046 | 95 | [55] E. Kasneci, K. Seßler, S. Küchemann, M. Bannert, D. Dementieva, F. Fischer, U. Gasser, G. Groh, S. Günnemann, E. Hüllermeier et al., "ChatGPT for good? On opportunities and challenges of large language models for education," Learning and Individual Differences, vol. 103, p. 102274, 2023.
[56] S. Wu, O. Irsoy, S. Lu, V. Dabravolski, M. Dredze, S. Gehrmann, P. Kambadur, D. Rosenberg, and G. Mann, "BloombergGPT: A large language model for finance," arXiv preprint arXiv:2303.17564, 2023.
[57] W.-C. Kang, J. Ni, N. Mehta, M. Sathiamoorthy, L. Hong, E. Chi, and D. Z. Cheng, "Do LLMs understand user preferences? Evaluating LLMs on user rating prediction," arXiv preprint arXiv:2305.06474, 2023.
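Reference [57] frames rating prediction as plain text completion. A sketch of the kind of prompt such an evaluation might send to an LLM is below; the template wording and helper name are our own illustration, not the paper's exact protocol.

def rating_prompt(user_history, target_item):
    # Serialize past (title, stars) pairs, then ask for a 1-5 score for the target.
    lines = [f'"{title}": {stars} stars' for title, stars in user_history]
    return (
        "Here is a user's movie rating history (1-5 stars):\n"
        + "\n".join(lines)
        + f'\nBased on this history, how many stars would the user give "{target_item}"?'
        + "\nAnswer with a single integer between 1 and 5."
    )

history = [("The Matrix", 5), ("Inception", 4), ("Titanic", 2)]
print(rating_prompt(history, "Interstellar"))
# The LLM's completion would then be parsed back into an integer rating.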
2307.02477 | 95 | Yasaman Razeghi, Robert L. Logan IV, Matt Gardner, and Sameer Singh. 2022. Impact of pretraining term frequencies on few-shot numerical reasoning. In Findings of the Association for Computational Linguistics: EMNLP 2022, pages 840–854, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Yi Ren, Jinzheng He, Xu Tan, Tao Qin, Zhou Zhao, and Tie-Yan Liu. 2020. PopMAG: Pop music accompaniment generation. In MM '20: The 28th ACM International Conference on Multimedia, Virtual Event / Seattle, WA, USA, October 12-16, 2020, pages 1198–1206.
Laria Reynolds and Kyle McDonell. 2021. Prompt programming for large language models: Beyond the few-shot paradigm. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA '21, New York, NY, USA. Association for Computing Machinery.
Abulhair Saparov and He He. 2023. Language models are greedy reasoners: A systematic formal analysis of chain-of-thought. In Proceedings of ICLR.
2307.02046 | 96 | [58] A. Zhiyuli, Y. Chen, X. Zhang, and X. Liang, "BookGPT: A general framework for book recommendation empowered by large language model," arXiv preprint arXiv:2305.15673, 2023.
[59] K. Bao, J. Zhang, Y. Zhang, W. Wang, F. Feng, and X. He, "TALLRec: An effective and efficient tuning framework to align large language model with recommendation," arXiv preprint arXiv:2305.00447, 2023.
[60] Z. Cui, J. Ma, C. Zhou, J. Zhou, and H. Yang, "M6-Rec: Generative pretrained language models are open-ended recommender systems," arXiv preprint arXiv:2205.08084, 2022.
[61] Z. Chen, "PALR: Personalization aware LLMs for recommendation," arXiv preprint arXiv:2305.07622, 2023.
2307.02477 | 96 | Abulhair Saparov and Tom M. Mitchell. 2022. Towards general natural language understanding with probabilistic worldbuilding. Transactions of the Association for Computational Linguistics, 10:325–342.
Sebastian Schuster and Christopher D. Manning. 2016. Enhanced English Universal Dependencies: An improved representation for natural language understanding tasks. In Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16), pages 2371–2378, Portorož, Slovenia. European Language Resources Association (ELRA).
Roger N. Shepard and Jacqueline Metzler. 1971. Mental rotation of three-dimensional objects. Science, 171(3972):701–703.
David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy P. Lillicrap, Karen Simonyan, and Demis Hassabis. 2017. Mastering chess and Shogi by self-play with a general reinforcement learning algorithm. ArXiv preprint, abs/1712.01815.
Mark K Singley and John Robert Anderson. 1989. The transfer of cognitive skill. 9. Harvard University Press.
2307.02046 | 97 | [62] S. Geng, S. Liu, Z. Fu, Y. Ge, and Y. Zhang, "Recommendation as language processing (RLP): A unified pretrain, personalized prompt & predict paradigm (P5)," in Proceedings of the 16th ACM Conference on Recommender Systems, 2022, pp. 299–315.
[63] X. Wang, K. Zhou, J.-R. Wen, and W. X. Zhao, "Towards unified conversational recommender systems via knowledge-enhanced prompt learning," in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 1929–1937.
[64] Y. Deng, W. Zhang, W. Xu, W. Lei, T.-S. Chua, and W. Lam, "A unified multi-task learning framework for multi-goal conversational recommender systems," ACM Transactions on Information Systems, vol. 41, no. 3, pp. 1–25, 2023.
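Entry [62] (P5) recasts recommendation as a text-to-text task: each training example becomes a (prompt, target) string pair for a sequence-to-sequence language model. A minimal sketch of that data formatting follows; the template wording is our own, not one of P5's released prompt templates.

def to_text_pair(user_id, history, next_item):
    # Render one sequential-recommendation example as (input text, target text).
    items = ", ".join(f"item_{i}" for i in history)
    prompt = (
        f"user_{user_id} has interacted with the following items in order: {items}. "
        "What item will the user interact with next?"
    )
    return prompt, f"item_{next_item}"

prompt, target = to_text_pair(42, [17, 93, 5], 61)
print(prompt)  # natural-language input for the seq2seq model
print(target)  # "item_61", the supervision target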
2307.02477 | 97 | Dominik Sobania, Martin Briesch, Carol Hanna, and Justyna Petke. 2023. An analysis of the automatic bug fixing performance of ChatGPT. In Proceedings of the 45th International Conference on Software Engineering.
Anders Søgaard. 2023. Grounding the vector space of an octopus: Word meaning from raw text. Minds and Machines, 33(1):33–54.
Alessandro Sordoni, Xingdi Yuan, Marc-Alexandre Côté, Matheus Pereira, Adam Trischler, Ziang Xiao, Arian Hosseini, Friederike Niedtner, and Nicolas Le Roux. 2023. Deep language networks: Joint prompt training of stacked LLMs using variational inference. ArXiv preprint, abs/2306.12509.
2307.02046 | 98 | [65] W. Hua, S. Xu, Y. Ge, and Y. Zhang, "How to index item IDs for recommendation foundation models," arXiv preprint arXiv:2305.06569, 2023.
[66] S. Rajput, N. Mehta, A. Singh, R. H. Keshavan, T. Vu, L. Heldt, L. Hong, Y. Tay, V. Q. Tran, J. Samost et al., "Recommender systems with generative retrieval," arXiv preprint arXiv:2305.05065, 2023.
[67] Z. Yuan, F. Yuan, Y. Song, Y. Li, J. Fu, F. Yang, Y. Pan, and Y. Ni, "Where to go next for recommender systems? ID- vs. modality-based recommender models revisited," arXiv preprint arXiv:2303.13835, 2023.
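References [65] and [66] both turn on how an item is spelled out as tokens so that a language model can generate it directly. A toy sketch of hierarchical "semantic ID" assignment by per-level bucketing is below; real systems such as [66] learn these codes with trained quantizers, so treat this purely as illustration.

import numpy as np

def semantic_ids(item_embeddings, levels=3, buckets=4):
    # Toy stand-in for learned quantization: bucket one embedding coordinate
    # per level into quartiles, yielding a short code sequence per item.
    codes = []
    for level in range(levels):
        coord = item_embeddings[:, level]
        edges = np.quantile(coord, np.linspace(0, 1, buckets + 1)[1:-1])
        codes.append(np.digitize(coord, edges))
    # Each item becomes a sequence like "<a_2><b_0><c_3>" an LM can emit token by token.
    return ["".join(f"<{chr(97 + l)}_{c}>" for l, c in enumerate(cs))
            for cs in zip(*codes)]

rng = np.random.default_rng(1)
print(semantic_ids(rng.normal(size=(5, 8)))[:3])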
2307.02477 | 98 | Aarohi Srivastava, Abhinav Rastogi, Abhishek Rao, Abu Awal Md Shoeb, Abubakar Abid, Adam Fisch, Adam R. Brown, Adam Santoro, Aditya Gupta, Adrià Garriga-Alonso, Agnieszka Kluska, Aitor Lewkowycz, Akshat Agarwal, Alethea Power, Alex Ray, Alex Warstadt, Alexander W. Kocurek, Ali Safaya, Ali Tazarv, Alice Xiang, Alicia Parrish, Allen Nie, Aman Hussain, Amanda Askell, Amanda Dsouza, Ambrose Slone, Ameet Rahane, Anantharaman S. Iyer, Anders Andreassen, Andrea Madotto, Andrea Santilli, Andreas Stuhlmüller, Andrew Dai, Andrew La, Andrew Lampinen, Andy Zou, Angela Jiang, Angelica Chen, Anh Vuong, Animesh Gupta, Anna Gottardi, Antonio Norelli, Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan,
2307.02046 | 99 | [68] Y. Hou, S. Mu, W. X. Zhao, Y. Li, B. Ding, and J.-R. Wen, "Towards universal sequence representation learning for recommender systems," in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 585–593.
[69] R. Li, W. Deng, Y. Cheng, Z. Yuan, J. Zhang, and F. Yuan, "Exploring the upper limits of text-based collaborative filtering using large language models: Discoveries and insights," arXiv preprint arXiv:2305.11700, 2023.
[70] Y. Hou, Z. He, J. McAuley, and W. X. Zhao, "Learning vector-quantized item representation for transferable sequential recommenders," in Proceedings of the ACM Web Conference 2023, 2023, pp. 1162–1171.
2307.02477 | 99 | Anu Venkatesh, Arash Gholamidavoodi, Arfa Tabassum, Arul Menezes, Arun Kirubarajan, Asher Mullokandov, Ashish Sabharwal, Austin Herrick, Avia Efrat, Aykut Erdem, Ayla Karakaş, B. Ryan Roberts, Bao Sheng Loe, Barret Zoph, Bartłomiej Bojanowski, Batuhan Özyurt, Behnam Hedayatnia, Behnam Neyshabur, Benjamin Inden, Benno Stein, Berk Ekmekci, Bill Yuchen Lin, Blake Howald, Bryan Orinion, Cameron Diao, Cameron Dour, Catherine Stinson, Cedrick Argueta, César Ferri Ramírez, Chandan Singh, Charles Rathkopf, Chenlin Meng, Chitta Baral, Chiyu Wu, Chris Callison-Burch, Chris Waites, Christian Voigt, Christopher D. Manning, Christopher Potts, Cindy Ramirez, Clara E. Rivera, Clemencia Siro, Colin Raffel, Courtney Ashcraft, Cristina Garbacea, Damien Sileo, Dan Garrette, | 2307.02477#99 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 100 | [71] Z. Fan, Z. Liu, S. Heinecke, J. Zhang, H. Wang, C. Xiong, and P. S. Yu, "Zero-shot item-based recommendation via multi-task product knowledge graph pre-training," arXiv preprint arXiv:2305.07633, 2023.
[72] K. Shin, H. Kwak, K.-M. Kim, M. Kim, Y.-J. Park, J. Jeong, and S. Jung, "One4all user representation for recommender systems in e-commerce," arXiv preprint arXiv:2106.00573, 2021.
[73] C. Wu, F. Wu, T. Qi, J. Lian, Y. Huang, and X. Xie, "PTUM: Pre-training user model from unlabeled user behaviors via self-supervision," arXiv preprint arXiv:2010.01494, 2020. | 2307.02046#100 |
2307.02477 | 100 | Dan Hendrycks, Dan Kilman, Dan Roth, Daniel Freeman, Daniel Khashabi, Daniel Levy, Daniel Moseguí González, Danielle Perszyk, Danny Hernandez, Danqi Chen, Daphne Ippolito, Dar Gilboa, David Dohan, David Drakard, David Jurgens, Debajyoti Datta, Deep Ganguli, Denis Emelin, Denis Kleyko, Deniz Yuret, Derek Chen, Derek Tam, Dieuwke Hupkes, Diganta Misra, Dilyar Buzan, Dimitri Coelho Mollo, Diyi Yang, Dong-Ho Lee, Dylan Schrader, Ekaterina Shutova, Ekin Dogus Cubuk, Elad Segal, Eleanor Hagerman, Elizabeth Barnes, Elizabeth Donoway, Ellie Pavlick, Emanuele Rodola, Emma Lam, Eric Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, | 2307.02477#100 |
2307.02046 | 101 | [74] L. Friedman, S. Ahuja, D. Allen, T. Tan, H. Sidahmed, C. Long, J. Xie, G. Schubiner, A. Patel, H. Lara et al., "Leveraging large language models in conversational recommender systems," arXiv preprint arXiv:2305.07961, 2023.
[75] T. Shen, J. Li, M. R. Bouadjenek, Z. Mai, and S. Sanner, "Towards understanding and mitigating unintended biases in language model-driven conversational recommendation," Information Processing & Management, vol. 60, no. 1, p. 103139, 2023.
[76] J. Wang, F. Yuan, M. Cheng, J. M. Jose, C. Yu, B. Kong, Z. Wang, B. Hu, and Z. Li, "TransRec: Learning transferable recommendation from mixture-of-modality feedback," arXiv preprint arXiv:2206.06190, 2022. | 2307.02046#101 |
2307.02477 | 101 | Chu, Eric Tang, Erkut Erdem, Ernie Chang, Ethan A. Chi, Ethan Dyer, Ethan Jerzak, Ethan Kim, Eunice Engefu Manyasi, Evgenii Zheltonozhskii, Fanyue Xia, Fatemeh Siar, Fernando Martínez-Plumed, Francesca Happé, Francois Chollet, Frieda Rong, Gaurav Mishra, Genta Indra Winata, Gerard de Melo, Germán Kruszewski, Giambattista Parascandolo, Giorgio Mariani, Gloria Wang, Gonzalo Jaimovitch-López, Gregor Betz, Guy Gur-Ari, Hana Galijasevic, Hannah Kim, Hannah Rashkin, Hannaneh Hajishirzi, Harsh Mehta, Hayden Bogar, Henry Shevlin, Hinrich Schütze, Hiromu Yakura, Hongming Zhang, Hugh Mee Wong, Ian Ng, Isaac Noble, Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James | 2307.02477#101 |
2307.02046 | 102 | [77] A. G. Carranza, R. Farahani, N. Ponomareva, A. Kurakin, M. Jagielski, and M. Nasr, "Privacy-preserving recommender systems with synthetic query generation using differentially private large language models," arXiv preprint arXiv:2305.05973, 2023.
[78] Z. Zheng, Z. Qiu, X. Hu, L. Wu, H. Zhu, and H. Xiong, "Generative job recommendations with large language model," arXiv preprint arXiv:2307.02157, 2023.
[79] H. Kim, J. Jeong, K.-M. Kim, D. Lee, H. D. Lee, D. Seo, J. Han, D. W. Park, J. A. Heo, and R. Y. Kim, "Intent-based product collections for e-commerce using pretrained language models," in 2021 International Conference on Data Mining Workshops (ICDMW). IEEE, 2021, pp. 228–237. | 2307.02046#102 |
2307.02477 | 102 | Jaap Jumelet, Jack Geissinger, Jackson Kernion, Jacob Hilton, Jaehoon Lee, Jaime Fernández Fisac, James B. Simon, James Koppel, James Zheng, James Zou, Jan Kocoń, Jana Thompson, Janelle Wingfield, Jared Kaplan, Jarema Radom, Jascha Sohl-Dickstein, Jason Phang, Jason Wei, Jason Yosinski, Jekaterina Novikova, Jelle Bosscher, Jennifer Marsh, Jeremy Kim, Jeroen Taal, Jesse Engel, Jesujoba Alabi, Jiacheng Xu, Jiaming Song, Jillian Tang, Joan Waweru, John Burden, John Miller, John U. Balis, Jonathan Batchelder, Jonathan Berant, Jörg Frohberg, Jos Rozen, Jose Hernandez-Orallo, Joseph Boudeman, Joseph Guerr, Joseph Jones, Joshua B. Tenenbaum, Joshua S. Rule, Joyce Chua, Kamil Kanclerz, Karen Livescu, Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin | 2307.02477#102 |
2307.02046 | 103 | [80] Z. Mao, H. Wang, Y. Du, and K.-f. Wong, "UniTRec: A unified text-to-text transformer and joint contrastive learning framework for text-based recommendation," arXiv preprint arXiv:2305.15756, 2023.
[81] L. Wu, Z. Qiu, Z. Zheng, H. Zhu, and E. Chen, "Exploring large language model for graph data understanding in online job recommendations," arXiv preprint arXiv:2307.05722, 2023.
[82] J. Devlin, M.-W. Chang, K. Lee, and K. Toutanova, "BERT: Pre-training of deep bidirectional transformers for language understanding," in Proceedings of NAACL-HLT, 2019.
[83] M. Lewis, Y. Liu, N. Goyal, M. Ghazvininejad, A. Mohamed, O. Levy, V. Stoyanov, and L. Zettlemoyer, "BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension," in Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, 2020, pp. 7871–7880. | 2307.02046#103 |
2307.02477 | 103 | Karl Krauth, Karthik Gopalakrishnan, Katerina Ignatyeva, Katja Markert, Kaustubh D. Dhole, Kevin Gimpel, Kevin Omondi, Kory Mathewson, Kristen Chiafullo, Ksenia Shkaruta, Kumar Shridhar, Kyle McDonell, Kyle Richardson, Laria Reynolds, Leo Gao, Li Zhang, Liam Dugan, Lianhui Qin, Lidia Contreras-Ochando, Louis-Philippe Morency, Luca Moschella, Lucas Lam, Lucy Noble, Ludwig Schmidt, Luheng He, Luis Oliveros Colón, Luke Metz, Lütfi Kerem Şenel, Maarten Bosma, Maarten Sap, Maartje ter Hoeve, Maheen Farooqi, Manaal Faruqui, Mantas Mazeika, Marco Baturan, Marco Marelli, Marco Maru, Maria Jose Ramírez Quintana, Marie Tolkiehn, Mario Giulianelli, Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna | 2307.02477#103 |
2307.02046 | 104 | [84] N. Houlsby, A. Giurgiu, S. Jastrzebski, B. Morrone, Q. De Laroussilhe, A. Gesmundo, M. Attariyan, and S. Gelly, "Parameter-efficient transfer learning for NLP," in International Conference on Machine Learning. PMLR, 2019, pp. 2790–2799.
[85] E. J. Hu, Y. Shen, P. Wallis, Z. Allen-Zhu, Y. Li, S. Wang, L. Wang, and W. Chen, "LoRA: Low-rank adaptation of large language models," arXiv preprint arXiv:2106.09685, 2021.
[86] T. Gao, A. Fisch, and D. Chen, "Making pre-trained language models better few-shot learners," arXiv preprint arXiv:2012.15723, 2020.
[87] J. Wei, M. Bosma, V. Y. Zhao, K. Guu, A. W. Yu, B. Lester, N. Du, A. M. Dai, and Q. V. Le, "Finetuned language models are zero-shot learners," arXiv preprint arXiv:2109.01652, 2021. | 2307.02046#104 |
2307.02477 | 104 | Martha Lewis, Martin Potthast, Matthew L. Leavitt, Matthias Hagen, Mátyás Schubert, Medina Orduna Baitemirova, Melody Arnaud, Melvin McElrath, Michael A. Yee, Michael Cohen, Michael Gu, Michael Ivanitskiy, Michael Starritt, Michael Strube, Michał Swędrowski, Michele Bevilacqua, Michihiro Yasunaga, Mihir Kale, Mike Cain, Mimee Xu, Mirac Suzgun, Mitch Walker, Mo Tiwari, Mohit Bansal, Moin Aminnaseri, Mor Geva, Mozhdeh Gheini, Mukund Varma T, Nanyun | 2307.02477#104 |
2307.02046 | 105 | [88] S. Dai, N. Shao, H. Zhao, W. Yu, Z. Si, C. Xu, Z. Sun, X. Zhang, and J. Xu, "Uncovering ChatGPT's capabilities in recommender systems," arXiv preprint arXiv:2305.02182, 2023.
[89] J. Zhang, K. Bao, Y. Zhang, W. Wang, F. Feng, and X. He, "Is ChatGPT fair for recommendation? Evaluating fairness in large language model recommendation," arXiv preprint arXiv:2305.07609, 2023.
[90] Q. Liu, N. Chen, T. Sakai, and X.-M. Wu, "A first look at LLM-powered generative news recommendation," arXiv preprint arXiv:2305.06566, 2023.
[91] Y. Hou, J. Zhang, Z. Lin, H. Lu, R. Xie, J. McAuley, and W. X. Zhao, "Large language models are zero-shot rankers for recommender systems," arXiv preprint arXiv:2305.08845, 2023. | 2307.02046#105 |
2307.02477 | 105 | Peng, Nathan A. Chi, Nayeon Lee, Neta Gur-Ari Krakover, Nicholas Cameron, Nicholas Roberts, Nick Doiron, Nicole Martinez, Nikita Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, | 2307.02477#105 |
2307.02046 | 106 | [92] X. Wang, X. Tang, W. X. Zhao, J. Wang, and J.-R. Wen, "Rethinking the evaluation for conversational recommendation in the era of large language models," arXiv preprint arXiv:2305.13112, 2023.
[93] W. Wang, X. Lin, F. Feng, X. He, and T.-S. Chua, "Generative recommendation: Towards next-generation recommender paradigm," arXiv preprint arXiv:2304.03516, 2023.
[94] Y. Du, D. Luo, R. Yan, H. Liu, Y. Song, H. Zhu, and J. Zhang, "Enhancing job recommendation through LLM-based generative adversarial networks," arXiv preprint arXiv:2307.10747, 2023.
[95] L. Wang and E.-P. Lim, "Zero-shot next-item recommendation using large pretrained language models," arXiv preprint arXiv:2304.03153, 2023. | 2307.02046#106 |
2307.02477 | 106 | Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramon Risco, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, | 2307.02477#106 |
2307.02046 | 107 | [96] H. Lyu, S. Jiang, H. Zeng, Y. Xia, and J. Luo, "LLM-Rec: Personalized recommendation via prompting large language models," arXiv preprint arXiv:2307.15780, 2023.
[97] M. Leszczynski, R. Ganti, S. Zhang, K. Balog, F. Radlinski, F. Pereira, and A. T. Chaganty, "Generating synthetic data for conversational music recommendation using random walks and language models," arXiv preprint arXiv:2301.11489, 2023.
[98] X. Wu, H. Zhou, W. Yao, X. Huang, and N. Liu, "Towards personalized cold-start recommendation with prompts," arXiv preprint arXiv:2306.17256, 2023.
[99] K. Christakopoulou, A. Lalama, C. Adams, I. Qu, Y. Amir, S. Chucri, P. Vollucci, F. Soldo, D. Bseiso, S. Scodel et al., "Large language models for user interest journeys," arXiv preprint arXiv:2305.15498, 2023. | 2307.02046#107 |
2307.02477 | 107 | Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Titus Tunduny, Tobias Gerstenberg, Trenton | 2307.02477#107 |
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
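The default-versus-counterfactual protocol this abstract describes can be made concrete in a few lines. The sketch below is purely illustrative: the example items, expected answers, and the `evaluate` helper are hypothetical stand-ins written for this note, not code from either paper.

```python
# Illustrative sketch of the default-vs-counterfactual comparison described
# in the abstract above. The items and helper are hypothetical, not paper code.

def evaluate(model_fn, items):
    """Fraction of (question, answer) pairs the model gets right."""
    return sum(model_fn(q).strip() == a for q, a in items) / len(items)

# Default condition: base-10 addition. Counterfactual: the same skill in
# base-9, where 27 + 62 = 100 (since 27_9 = 25 and 62_9 = 56, and 25 + 56 = 81).
default_items = [("What is 27+62? Answer in base-10.", "89")]
counterfactual_items = [("What is 27+62? Answer in base-9.", "100")]

def report(model_fn):
    d = evaluate(model_fn, default_items)
    c = evaluate(model_fn, counterfactual_items)
    # A large gap suggests condition-specific, memorized procedures rather
    # than a transferable abstract skill; a small gap suggests the opposite.
    print(f"default={d:.2f}  counterfactual={c:.2f}  gap={d - c:.2f}")
```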
2307.02046 | 108 | [100] S. Sanner, K. Balog, F. Radlinski, B. Wedin, and L. Dixon, "Large language models are competitive near cold-start recommenders for language- and item-based preferences," arXiv preprint arXiv:2307.14225, 2023.
[101] J. Zhang, "Graph-toolformer: To empower LLMs with graph reasoning ability via prompt augmented by ChatGPT," arXiv preprint arXiv:2304.11116, 2023.
[102] Y. Xi, W. Liu, J. Lin, J. Zhu, B. Chen, R. Tang, W. Zhang, R. Zhang, and Y. Yu, "Towards open-world recommendation with knowledge augmentation from large language models," arXiv preprint arXiv:2306.10933, 2023.
[103] G. Lin and Y. Zhang, "Sparks of artificial general recommender (AGR): Early experiments with ChatGPT," arXiv preprint arXiv:2305.04518, 2023. | 2307.02046#108 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02046 | 109 | [104] W. Hua, Y. Ge, S. Xu, J. Ji, and Y. Zhang, "UP5: Unbiased foundation model for fairness-aware recommendation," arXiv preprint arXiv:2305.12090, 2023.
[105] J. Ji, Z. Li, S. Xu, W. Hua, Y. Ge, J. Tan, and Y. Zhang, "GenRec: Large language model for generative recommendation," arXiv e-prints, pp. arXiv–2307, 2023.
[106] Y. Yao, Z. Li, and H. Zhao, "Beyond chain-of-thought, effective graph-of-thought reasoning in large language models," arXiv preprint arXiv:2305.16582, 2023.
[107] T. Shin, Y. Razeghi, R. L. Logan IV, E. Wallace, and S. Singh, "AutoPrompt: Eliciting knowledge from language models with automatically generated prompts," arXiv preprint arXiv:2010.15980, 2020. | 2307.02046#109 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 109 | William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. 2023. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models.
Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT good at search? Investigating large language models as re-ranking agent.
Oyvind Tafjord, Bhavana Dalvi, and Peter Clark. 2021. ProofWriter: Generating implications, proofs, and abductive statements over natural language. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3621–3634, Online. Association for Computational Linguistics. | 2307.02477#109 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 110 | [108] Q. Dong, L. Li, D. Dai, C. Zheng, Z. Wu, B. Chang, X. Sun, J. Xu, and Z. Sui, "A survey for in-context learning," arXiv preprint arXiv:2301.00234, 2022.
[109] Y. Wu, R. Xie, Y. Zhu, F. Zhuang, X. Zhang, L. Lin, and Q. He, "Personalized prompts for sequential recommendation," arXiv preprint arXiv:2205.09666, 2022.
[110] L. Guo, C. Wang, X. Wang, L. Zhu, and H. Yin, "Automated prompting for non-overlapping cross-domain sequential recommendation," arXiv preprint arXiv:2304.04218, 2023.
[111] P. Manakul, A. Liusie, and M. J. Gales, "SelfCheckGPT: Zero-resource black-box hallucination detection for generative large language models," arXiv preprint arXiv:2303.08896, 2023. | 2307.02046#110 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 110 | Xiaojuan Tang, Zilong Zheng, Jiaqi Li, Fanxu Meng, Song-Chun Zhu, Yitao Liang, and Muhan Zhang. 2023. Large language models are in-context semantic reasoners rather than symbolic reasoners.
Christian Terwiesch. 2023. Would Chat GPT3 get a Wharton MBA? A prediction based on its performance in the operations management course.
Nenad Tomasev, Ulrich Paquet, Demis Hassabis, and Vladimir Kramnik. 2020. Assessing game balance with AlphaZero: Exploring alternative rule sets in chess. ArXiv preprint, abs/2009.04374.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurelien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. 2023. LLaMA: Open and efficient foundation language models.
Tomer D. Ullman and Joshua B. Tenenbaum. 2020. Bayesian models of conceptual development: Learning as building models of the world. Annual Review of Developmental Psychology, 2(1):533–558. | 2307.02477#110 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 111 | [112] N. McKenna, T. Li, L. Cheng, M. J. Hosseini, M. Johnson, and M. Steedman, "Sources of hallucination by large language models on inference tasks," arXiv preprint arXiv:2305.14552, 2023. [113] Z. Ji, N. Lee, R. Frieske, T. Yu, D. Su, Y. Xu, E. Ishii, Y. J. Bang, A. Madotto, and P. Fung, "Survey of hallucination in natural language generation," ACM Computing Surveys, vol. 55, no. 12, pp. 1–38, 2023.
[114] T. Y. Zhuo, Y. Huang, C. Chen, and Z. Xing, "Exploring AI ethics of ChatGPT: A diagnostic analysis," arXiv preprint arXiv:2301.12867, 2023.
[115] H. Liu, Y. Wang, W. Fan, X. Liu, Y. Li, S. Jain, Y. Liu, A. Jain, and J. Tang, "Trustworthy AI: A computational perspective," ACM Transactions on Intelligent Systems and Technology, 2022. | 2307.02046#111 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 111 | Steven G Vandenberg and Allan R Kuse. 1978. Mental rotations, a group test of three-dimensional spatial visualization. Perceptual and motor skills, 47(2):599–604.
Victor Veitch, Alexander D'Amour, Steve Yadlowsky, and Jacob Eisenstein. 2021. Counterfactual invariance to spurious correlations in text classification. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 16196–16208.
Kai Von Fintel and Irene Heim. 2011. Intensional semantics. Unpublished Lecture Notes.
Boshi Wang, Xiang Deng, and Huan Sun. 2022a. Iteratively prompt pre-trained language models for chain of thought. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 2714–2730, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics.
Xuezhi Wang, Jason Wei, Dale Schuurmans, Quoc Le, Ed Chi, Sharan Narang, Aakanksha Chowdhery, and Denny Zhou. 2023a. Self-consistency improves chain of thought reasoning in language models. In Proceedings of ICLR. | 2307.02477#111 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 112 | [116] W. Fan, T. Derr, X. Zhao, Y. Ma, H. Liu, J. Wang, J. Tang, and Q. Li, "Attacking black-box recommendations via copying cross-domain user profiles," in 2021 IEEE 37th International Conference on Data Engineering (ICDE).
[117] J. Chen, W. Fan, G. Zhu, X. Zhao, C. Yuan, Q. Li, and Y. Huang, "Knowledge-enhanced black-box attacks for recommendations," in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, 2022, pp. 108–117.
[118] W. Fan, X. Zhao, Q. Li, T. Derr, Y. Ma, H. Liu, J. Wang, and J. Tang, "Adversarial attacks for black-box recommender systems via copying transferable cross-domain user profiles," IEEE Transactions on Knowledge and Data Engineering, 2023.
[119] W. Fan, W. Jin, X. Liu, H. Xu, X. Tang, S. Wang, Q. Li, J. Tang, J. Wang, and C. Aggarwal, "Jointly attacking graph neural network and its explanations," in 2023 IEEE 39th International Conference on Data Engineering (ICDE). IEEE, 2023. | 2307.02046#112 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02046 | 113 | [120] OpenAI, "GPT-4 technical report," OpenAI, 2023. [121] J. Tang, X. Du, X. He, F. Yuan, Q. Tian, and T.-S. Chua, "Adversarial training towards robust multimedia recommender system," IEEE Transactions on Knowledge and Data Engineering, vol. 32, no. 5, pp. 855–867, 2019.
[122] G. Zhang, Y. Zhang, Y. Zhang, W. Fan, Q. Li, S. Liu, and S. Chang, "Fairness reprogramming," in Thirty-sixth Conference on Neural Information Processing Systems, 2022.
[123] H. Liu, J. Dacon, W. Fan, H. Liu, Z. Liu, and J. Tang, "Does gender matter? Towards fairness in dialogue systems," in Proceedings of the 28th International Conference on Computational Linguistics, 2020, pp. 4403–4416. | 2307.02046#113 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 113 | Yizhong Wang, Swaroop Mishra, Pegah Alipoormolabashi, Yeganeh Kordi, Amirreza Mirzaei, Atharva Naik, Arjun Ashok, Arut Selvan Dhanasekaran, Anjana Arunkumar, David Stap, Eshaan Pathak, Giannis Karamanolakis, Haizhi Lai, Ishan Purohit, Ishani Mondal, Jacob Anderson, Kirby Kuznia, Krima Doshi, Kuntal Kumar Pal, Maitreya Patel, Mehrad Moradshahi, Mihir Parmar, Mirali Purohit, Neeraj Varshney, Phani Rohitha Kaza, Pulkit Verma, Ravsehaj Singh Puri, Rushang Karia, Savan Doshi, Shailaja Keyur Sampat, Siddhartha Mishra, Sujan Reddy A, Sumanta Patro, Tanay Dixit, and Xudong Shen. 2022b. Super-NaturalInstructions: Generalization via declarative instructions on 1600+ NLP tasks. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 5085–5109, Abu Dhabi, United Arab Emirates. Association for Computational Linguistics. | 2307.02477#113 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 114 | [124] S. Bills, N. Cammarata, D. Mossing, H. Tillman, L. Gao, G. Goh, I. Sutskever, J. Leike, J. Wu, and W. Saunders, "Language models can explain neurons in language models," URL https://openaipublic.blob.core.windows.net/neuron-explainer/paper/index.html (Date accessed: 14.05.2023), 2023.
[125] N. Carlini, F. Tramer, E. Wallace, M. Jagielski, A. Herbert-Voss, K. Lee, A. Roberts, T. B. Brown, D. Song, U. Erlingsson et al., "Extracting training data from large language models," in USENIX Security Symposium, vol. 6, 2021.
[126] Y. Li, Z. Tan, and Y. Liu, "Privacy-preserving prompt tuning for large language model services," arXiv preprint arXiv:2305.06212, 2023. | 2307.02046#114 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 114 | Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia, Ed H. Chi, Quoc V Le, and Denny Zhou. 2022. Chain of thought prompting elicits reasoning in large language models. In Advances in Neural Information Processing Systems.
Lionel Wong, Gabriel Grand, Alexander K. Lew, Noah D. Goodman, Vikash K. Mansinghka, Jacob Andreas, and Joshua B. Tenenbaum. 2023. From word models to world models: Translating from natural language to the probabilistic language of thought.
Frank F. Xu, Uri Alon, Graham Neubig, and Vincent Josua Hellendoorn. 2022. A systematic evaluation of large language models of code. In Proceedings of the 6th ACM SIGPLAN International Symposium on Machine Programming, MAPS 2022, pages 1–10, New York, NY, USA. Association for Computing Machinery.
Xiaoyu Yang, Stephen Obadinma, Huasha Zhao, Qiong Zhang, Stan Matwin, and Xiaodan Zhu. 2020. SemEval-2020 task 5: Counterfactual recognition. In Proceedings of the Fourteenth Workshop on Semantic Evaluation, pages 322–335, Barcelona (online). International Committee for Computational Linguistics. | 2307.02477#114 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 115 | [127] A. J. Nastasi, K. R. Courtright, S. D. Halpern, and G. E. Weissman, "Does ChatGPT provide appropriate and equitable medical advice?: A vignette-based, clinical evaluation across care contexts," medRxiv, pp. 2023–02, 2023.
[128] H. Zhang, J. Chen, F. Jiang, F. Yu, Z. Chen, J. Li, G. Chen, X. Wu, Z. Zhang, Q. Xiao et al., "HuatuoGPT, towards taming language model to be a doctor," arXiv preprint arXiv:2305.15075, 2023. [129] H. Xiong, S. Wang, Y. Zhu, Z. Zhao, Y. Liu, Q. Wang, and D. Shen, "DoctorGLM: Fine-tuning your Chinese doctor is not a herculean task," arXiv preprint arXiv:2304.01097, 2023.
[130] H.-T. Nguyen, "A brief report on LawGPT 1.0: A virtual legal assistant based on GPT-3," arXiv preprint arXiv:2302.05729, 2023. | 2307.02046#115 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 115 | Shunyu Yao, Dian Yu, Jeffrey Zhao, Izhak Shafran, Thomas L Griffiths, Yuan Cao, and Karthik Narasimhan. 2023. Tree of thoughts: Deliberate problem solving with large language models. ArXiv preprint, abs/2305.10601.
Charles Yu, Ryan Sie, Nicolas Tedeschi, and Leon Bergen. 2020. Word frequency does not predict grammatical knowledge in language models. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4040–4054, Online. Association for Computational Linguistics.
Wenhao Yu, Meng Jiang, Peter Clark, and Ashish Sabharwal. 2023. IfQA: A dataset for open-domain question answering under counterfactual presuppositions.
Bowen Zhang, Daijun Ding, and Liwen Jing. 2023a. How would stance detection techniques evolve after the launch of ChatGPT?
Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. 2023b. How language model hallucinations can snowball. | 2307.02477#115 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 116 | [131] Q. Huang, M. Tao, Z. An, C. Zhang, C. Jiang, Z. Chen, Z. Wu, and Y. Feng, "Lawyer LLaMA technical report," arXiv preprint arXiv:2305.15062, 2023.
[132] H. Yang, X.-Y. Liu, and C. D. Wang, "FinGPT: Open-source financial large language models," arXiv preprint arXiv:2306.06031, 2023.
[133] W. Jin, H. Mao, Z. Li, H. Jiang, C. Luo, H. Wen, H. Han, H. Lu, Z. Wang, R. Li et al., "Amazon-M2: A multilingual multi-locale shopping session dataset for recommendation and text generation," arXiv preprint arXiv:2307.09688, 2023.
[134] J. Li, W. Zhang, T. Wang, G. Xiong, A. Lu, and G. Medioni, "GPT4Rec: A generative framework for personalized recommendation and user interests interpretation," arXiv preprint arXiv:2304.03879, 2023. | 2307.02046#116 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 116 | Muru Zhang, Ofir Press, William Merrill, Alisa Liu, and Noah A. Smith. 2023b. How language model hallucinations can snowball.
Tianjun Zhang, Yi Zhang, Vibhav Vineet, Neel Joshi, and Xin Wang. 2023c. Controllable text-to-image generation with GPT-4. ArXiv preprint, abs/2305.18583.
Xikun Zhang, Deepak Ramachandran, Ian Tenney, Yanai Elazar, and Dan Roth. 2020. Do language embeddings capture scales? In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 292-299, Online. Association for Computational Linguistics.
# A Full Setups
Unless otherwise specified, we use temperature=0 when sampling from the LMs.
# A.1 Arithmetic | 2307.02477#116 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 117 | [135] J. Fu, F. Yuan, Y. Song, Z. Yuan, M. Cheng, S. Cheng, J. Zhang, J. Wang, and Y. Pan, "Exploring adapter-based transfer learning for recommender systems: Empirical studies and practical insights," arXiv preprint arXiv:2305.15036, 2023.
[136] L. Wang, J. Zhang, X. Chen, Y. Lin, R. Song, W. X. Zhao, and J.-R. Wen, "Recagent: A novel simulation paradigm for recommender systems," arXiv preprint arXiv:2306.02552, 2023. | 2307.02046#117 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 117 | # A Full Setups
Unless otherwise specified, we use temperature=0 when sampling from the LMs.
# A.1 Arithmetic
We randomly sample 1,000 two-digit addition expressions and evaluate them in bases 8, 9, 10, 11, and 16. Each base is sampled separately; for bases other than base-10, we make sure all expressions evaluate to a different result in that base compared to base-10, so that these expressions discriminate between the bases. To ensure the LMs understand these bases, we design the CCC to ask the model what the number following a given number is. We want the model to know when to carry over and when not to, so we take the 100 smallest numbers in the given base that end with the maximum digit in that base, and 100 that end with 0.
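To make the sampling and filtering step concrete, here is a minimal Python sketch; the helper names are ours, and the treatment of bases with letter digits (where a base-10 reading is undefined, so no filter is needed) is our own assumption rather than the paper's stated implementation:

```python
import random

DIGITS = "0123456789abcdef"

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer as a numeral in the given base."""
    s = ""
    while n:
        s = DIGITS[n % base] + s
        n //= base
    return s or "0"

def sample_expressions(base: int, k: int = 1000, seed: int = 0):
    """Sample k two-digit additions written in `base` whose sum in `base`,
    as a numeral, differs from the base-10 sum of the same digit strings."""
    rng = random.Random(seed)
    exprs = set()
    while len(exprs) < k:
        a = rng.randint(base, base * base - 1)  # two digits in `base`
        b = rng.randint(base, base * base - 1)
        sa, sb = to_base(a, base), to_base(b, base)
        if base != 10 and sa.isdigit() and sb.isdigit():
            if to_base(a + b, base) == str(int(sa) + int(sb)):
                continue  # same surface answer in base 10: not discriminative
        exprs.add(f"{sa} + {sb}")
    return sorted(exprs)

def ccc_numbers(base: int, n: int = 100):
    """Numerals for the successor checks: the n smallest numbers ending in
    the maximum digit (carry needed) and the n smallest ending in 0 (no carry)."""
    carry = [to_base(m, base) for m in range(base - 1, base * (n + 1), base)][:n]
    no_carry = [to_base(m, base) for m in range(base, base * (n + 2), base)][:n]
    return carry, no_carry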
# A.2 Programming | 2307.02477#117 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 118 | Wenqi Fan is a research assistant professor of the Department of Computing at The Hong Kong Polytechnic University (PolyU). He received his Ph.D. degree from the City University of Hong Kong (CityU) in 2020. From 2018 to 2020, he was a visiting research scholar at Michigan State University (MSU). His research interests are in the broad areas of machine learning and data mining, with a particular focus on Recommender Systems, Graph Neural Networks, and Trustworthy Recommendations. He has published innovative papers in top-tier journals and conferences such as TKDE, TIST, KDD, WWW, ICDE, NeurIPS, SIGIR, IJCAI, AAAI, RecSys, WSDM, etc. He serves as a (senior) program committee member and session chair for top-tier conferences (e.g., ICML, ICLR, NeurIPS, KDD, WWW, AAAI, IJCAI, WSDM, etc.) and as a reviewer for journals (e.g., TKDE, TIST, TKDD, TOIS, TAI, etc.). More information about him can be found at https://wenqifan03.github.io. | 2307.02046#118 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 118 | # A.2 Programming
We use the HumanEval dataset (Chen et al., 2021), which contains short Python programs and is commonly used to assess the coding ability of LMs (Bai et al., 2022; Xu et al., 2022; Wang et al., 2023b; i.a.). It was designed as a code-generation dataset, where a model writes a function from a specification and is evaluated against test cases with input-output pairs. Different from our other tasks, we follow prior work (Touvron et al., 2023; Wang et al., 2023b) and (1) use temperature 0.1 when evaluating pass@1 and 0.8 for pass@10, (2) sample 50 responses, and (3) only evaluate without 0-shot CoT. While the original work (Chen et al., 2021) recommended sampling 200 responses, this is very expensive, so we follow Wang et al. (2023b) and only sample 50. In Figure 2, we only show the performance on the subset of HumanEval where a 1-based execution of the ground-truth program fails the unit tests. These are the instances that distinguish between 0- and 1-based indexing. We also report results on the full HumanEval dataset in Table 21. | 2307.02477#118 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
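For reference, pass@k here is the standard unbiased estimator of Chen et al. (2021): with n sampled completions per problem, c of which pass the unit tests, pass@k = 1 - C(n-c, k)/C(n, k), averaged over problems. A minimal sketch (the function name is ours):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Probability that at least one of k completions drawn without
    replacement from n samples (c of them correct) passes the tests."""
    if n - c < k:
        return 1.0  # every size-k subset must contain a correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With the setup above: 50 samples per problem and k = 1 or 10, e.g.
print(pass_at_k(50, 5, 10))  # ~0.69
```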
2307.02046 | 119 | Zihuai Zhao is currently a PhD student of the Department of Computing (COMP), Hong Kong Polytechnic University (PolyU), under the supervision of Prof. Qing Li and Dr. Wenqi Fan. Before joining the PolyU, he received both his Master's degree (MPhil in Electrical Engineering) and Bachelor's degree (B.Eng. (Hons) in Electrical Engineering) from the University of Sydney in 2023 and 2020, respectively. His research interest covers Recommender Systems, Natural Language Processing, and Deep Reinforcement Learning. He has published innovative works in top-tier journals such as IoT-J. For more information, please visit https://scofizz.github.io/.
| 2307.02046#119 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 119 | We also consider another setup, code execution, where we give the LM the ground-truth program and ask the LM for the output of the test cases given the input. We remove four programs in HumanEval that are not compatible with this format (ID: 32, 38, 50, and 53), only for this execution task. Because the program would have a different functionality under 1-based indexing, we remove the docstring that is the function description, and also rename the function to the uninformative name "function", to avoid confusing the LM. Some programs also become invalid under 1-based indexing, specifically, those that perform any indexing using 0. We remove all test cases that involve indexing with 0 and programs that do not have test cases left after this removal. 150 programs and 969 test cases remain. Some of these test cases may not distinguish between 0- and 1-based indexing. So for our main task (i.e., not CCC), we only consider test cases whose outputs are different under 0- vs. 1-based indexing, and there are 113 of them. | 2307.02477#119 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
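The filtering step amounts to executing each test input under both conventions and keeping only inputs whose outputs differ. A toy illustration of that idea (not the paper's implementation; a list wrapper shifts positive integer indices to simulate 1-based semantics, and slices are left unmodeled):

```python
class OneBased(list):
    """Toy list whose positive integer indices are treated as 1-based."""
    def __getitem__(self, i):
        if isinstance(i, int):
            if i == 0:
                raise IndexError("index 0 is invalid under 1-based indexing")
            if i > 0:
                i -= 1
        return list.__getitem__(self, i)

def is_discriminative(func, xs) -> bool:
    """Keep a test input only if 0- and 1-based execution disagree."""
    try:
        return func(list(xs)) != func(OneBased(xs))
    except IndexError:
        return False  # mirrors dropping tests that index with 0

# xs[1] is the second element 0-based but the first element 1-based,
# so this input discriminates between the two conventions.
second = lambda xs: xs[1]
assert is_discriminative(second, ["a", "b"])
```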
2307.02046 | 120 |
Jiatong Li is currently a PhD student of the Department of Computing (COMP), The Hong Kong Polytechnic University (funded by HKPFS). Before joining the PolyU, he received his Master's degree of Information Technology (with Distinction) from the University of Melbourne, under the supervision of Dr. Lea Frermann. In 2021, he got his bachelor's degree in Information Security from Shanghai Jiao Tong University. His interest lies in Natural Language Processing, Drug Discovery, and Recommender Systems. He has published innovative works in top-tier conferences such as IJCAI and ACL. For more information, please visit https://phenixace.github.io/. | 2307.02046#120 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 120 | Because we use the same prompt to indicate the counterfactual conditions for both code generation and execution, and because we want to maintain comparability with prior work for the former, we only include CCC in the execution setup. We believe they reflect the LMs' understanding of 1-based indexing in the generation setup too. We ask the LM for the output of simple tests about 1-based indexing such as "qrstu"[4] and "qrs"[:2]. They do not require sophisticated reasoning under the counterfactual conditions and yet are sufficient to discriminate between the default and the counterfactual conditions. We append 5 such checks after each of the 150 programs, totaling 750 CCC.
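Each such check has a deterministic answer under each convention, so grading is mechanical. A small sketch of how the expected answers could be derived (our own helper; it covers only positive integer indices, since the paper does not spell out its 1-based semantics for slices like "qrs"[:2]):

```python
def expected_answer(s: str, i: int, one_based: bool) -> str:
    """Expected result of s[i] for a positive index i under each convention."""
    return s[i - 1] if one_based else s[i]

# The '"qrstu"[4]' check from above:
assert expected_answer("qrstu", 4, one_based=False) == "u"  # default Python
assert expected_answer("qrstu", 4, one_based=True) == "t"   # counterfactual
```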
For the execution task, we do not consider PaLM-2, because it only allows a maximum output length of 1,024 tokens, which leads to truncated, unparseable results for most test instances, especially under 0-shot CoT.
# A.3 Basic Syntactic Reasoning | 2307.02477#120 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 121 |
Yunqing Liu is currently a PhD student of the Department of Computing (COMP), Hong Kong Polytechnic University (PolyU), under the supervision of Dr. Wenqi Fan. Before joining the PolyU, he received his Master's degree in Computer Science (M.Sc.) from the University of Edinburgh, under the supervision of Dr. Elizabeth Polgreen. In 2020, he got his bachelor's degrees from Wuhan University (B.Sc. in Chemistry and B.Eng. in Computer Science and Technology). His research interest includes Drug Discovery, Graph Neural Networks, and Natural Language Processing. He has published innovative works in top-tier conferences and journals such as IJCAI, EACL, EurJOC, and Organic Letters. For more information, please visit https://liuyunqing.github.io/. | 2307.02046#121 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 121 | # A.3 Basic Syntactic Reasoning
We follow Ravfogel et al. (2019) and create synthetic variants of English with all six orderings of the subject, verb, and object. Given a dependency tree of a regular English sentence, we alter the order of subject and object nodes with respect to the corresponding verb. The subtrees rooted at subject or object nodes are moved as a whole, whereas other non-core dependent nodes (e.g., prepositional phrases) are kept in the original positions. We use 100 sentences from the English Penn Treebank (Marcus et al., 1993), and convert the original phrase-structure trees into Universal Dependencies (Nivre et al., 2016) using the Stanford converter (Schuster and Manning, 2016).
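As a toy illustration of the reordering (over pre-extracted flat spans rather than the full dependency trees the procedure actually operates on):

```python
from itertools import permutations

def reorder(subj, verb, obj, order: str) -> str:
    """Rearrange subject/verb/object token spans into one of the six
    orders (e.g., 'SOV'); other dependents are ignored in this toy."""
    spans = {"S": subj, "V": verb, "O": obj}
    return " ".join(tok for role in order for tok in spans[role])

for order in ("".join(p) for p in permutations("SVO")):
    print(order, "->", reorder(["the", "dog"], ["chased"], ["a", "cat"], order))
# SVO -> the dog chased a cat ... OVS -> a cat chased the dog
```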
Our task is to identify the main verb and the main subject of a sentence. We only choose sentences where the main subject contains a single word. Ravfogel et al. (2019)'s data generation procedure sometimes yields SVO-order sentences that are unnatural English. To eliminate this complexity, we retain only sentences whose SVO variant according to Ravfogel et al. (2019)'s data generation procedure is identical to the original English sentence. | 2307.02477#121 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 122 | Xiaowei Mei received his PhD in Information Systems and Operations Management from the University of Florida. His current research aims to extend standard economic models of information systems in two directions: differentiating various forms of social contagion or peer effects in online and offline networks using empirical methods and big data analytic skills, and designing optimal market mechanisms in information systems using game theory, statistics, and simulation methods. His work has been accepted by leading journals such as the Journal of Management Information Systems.
Yiqi Wang is an assistant professor at the College of Computer, National University of Defense Technology (NUDT). She is currently working on graph neural networks, including fundamental algorithms, robustness, and their applications. She has published innovative works in top-tier conferences such as ICML, KDD, WWW, EMNLP, WSDM, and AAAI. She serves as a program committee member for top-tier conferences (e.g., WWW, AAAI, IJCAI, CIKM, and WSDM) and as a reviewer for journals (e.g., TIST, TKDD, TKDE, and TOIS). She also serves as the leading tutor of tutorials at top-tier conferences (e.g., KDD 2020, AAAI 2021, SDM 2021, KDD 2021, and ICAPS 2021). | 2307.02046#122 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 122 | We designed the CCC to assess how well LMs understand the instruction that explains the difference of word orders in the counterfactual settings. We synthetically generate 100 simple three-word sentences (e.g., "anna saw john") in five counterfactual word orders (e.g., "anna john saw" in SOV), and ask LMs to reconstruct the original English sentences in SVO order. Conceptually, this is equivalent to asking the model to identify the subject, verb, and object in the perturbed order, but using a format that is more familiar to the LM. | 2307.02477#122 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
2307.02046 | 123 | Zhen Wen is a Sr. Applied Science Manager at Amazon Prime Video, leading science efforts in video search, recommendation, and promotions. He was previously the chief scientist of Tencent's news feeds product, serving more than 200 million users each day, where he directed a team of AI scientists and engineers aiming at deep content understanding, to source and push the content users find most relevant and interesting, including short-form video recommendation. He also held various science and technology roles at Alibaba Cloud, Google, and IBM Research. Dr. Wen received his PhD from the University of Illinois at Urbana-Champaign. His work received best paper awards at the International Conference on Information Systems and the ACM Conference on Intelligent User Interfaces. Dr. Wen also received multiple Tencent Outstanding R&D Awards, the IBM Outstanding Innovation Award, the IBM Research Accomplishment Award, and an IBM invention achievement award. Dr. Wen served as an Associate Editor of IEEE Transactions on Multimedia. | 2307.02046#123 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 123 | To generate the simple sentences for the CCC, we designed a simple context-free grammar where the subject and the object are sampled from a vocabulary of person names, and the verb is sampled from the set {saw, loves, calls, knows, sees}. A key feature of the sentences generated from this approach is their retained plausibility when the subject and object are interchanged. This means that given a counterfactual sentence (e.g., "anna john saw"), there are two natural English sentences as candidates for reconstruction (i.e., "anna saw john" and "john saw anna"). Due to this inherent ambiguity, LMs cannot default to the heuristic of treating the synthetic sentence as a bag-of-words and then reconstructing the most natural ordering of those words in real English. The random baseline chooses a random noun as the main subject and a random verb as the main verb. | 2307.02477#123 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks |
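A minimal sketch of such a generator (the name vocabulary is illustrative; the verb set is quoted from the text):

```python
import random

NAMES = ["anna", "john", "tom", "lucas", "mary"]    # illustrative vocabulary
VERBS = ["saw", "loves", "calls", "knows", "sees"]  # verb set from the text

def sample_ccc(order: str, rng: random.Random):
    """Sample distinct subject/object names and a verb, then emit the
    sentence in the given counterfactual order plus its SVO gold form.
    Since both arguments are person names, swapping them stays plausible."""
    subj, obj = rng.sample(NAMES, 2)
    words = {"S": subj, "V": rng.choice(VERBS), "O": obj}
    counterfactual = " ".join(words[r] for r in order)
    gold_svo = " ".join(words[r] for r in "SVO")
    return counterfactual, gold_svo

rng = random.Random(0)
print(sample_ccc("SOV", rng))  # e.g., ('anna john saw', 'anna saw john')
```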
2307.02046 | 124 | Fei Wang is head of personalization science at Amazon Prime Video, responsible for improving users' experience and engagement by developing a deep understanding of customers and providing relevant, personalized, and timely recommendations. Previously, he was a senior director with Visa Research, leading a group of AI researchers on projects ranging from personalized restaurant recommendations and fraud reduction to credit risk prevention. With 50+ patents and 50+ research articles, he is also known for research on financial data mining, mobile healthcare, social computing, and multimodal information retrieval. He has received a number of best paper awards from conferences like RecSys, Multimedia Information Retrieval, and Computers in Cardiology. | 2307.02046#124 | Recommender Systems in the Era of Large Language Models (LLMs) |
2307.02477 | 124 | A note on CCC results. The results for this task are shown in Table 22. Generally, the models pass our crafted CCC challenge with decent accuracy, but we observed that, in a few cases, the LMs are confused by the reconstruction ambiguity explained above. GPT-3.5 and Claude fail in the OVS settings, where they often directly copy the original sentence: e.g., instead of reconstructing "anna saw john" to "john saw anna", they simply copy the original sentence "anna saw john" as the output. Similarly, PaLM-2 often incorrectly reverses the subject and object in the SOV and VSO settings: e.g., instead of reconstructing "calls tom lucas" to "tom calls lucas", it outputs "lucas calls tom".
# A.4 Natural Language Reasoning with First-Order Logic
2307.02046 | 125 | Xiangyu Zhao is an assistant professor of the school of data science at City University of Hong Kong (CityU). Before CityU, he completed his Ph.D. at Michigan State University. His current research interests include data mining and machine learning, especially Reinforcement Learning and its applications in Information Retrieval. He has published papers in top conferences (e.g., KDD, WWW, AAAI, SIGIR, ICDE, CIKM, ICDM, WSDM, RecSys, ICLR) and journals (e.g., TOIS, SIGKDD, SIGWeb, EPL, APS). His research received ICDM'21 Best-ranked Papers, Global Top 100 Chinese New Stars in AI, CCF-Tencent Open Fund, Criteo Research Award, and Bytedance Research Award. He serves as a (senior) program committee member and session chair for top data science conferences (e.g., KDD, AAAI, IJCAI, ICML, ICLR, CIKM), and as a journal reviewer (e.g., TKDE, TKDD, TOIS, CSUR). He is the organizer of DRL4KDD@KDD'19, DRL4IR@SIGIR'20, 2nd
2307.02477 | 125 | We use the FOLIO dataset (Han et al., 2022), which contains premises most of which are consistent with common sense and are hence amenable to our counterfactual study. We use the full dataset, combining the training and development sets for a total of 1,204 instances, for the logistic regression analysis in §5.1. For our counterfactual study, however, automatically altering the premises to violate common sense is not trivial, so one author manually rewrote the premises of a subset of 81 instances to be counterfactual, and another author verified the rewrite. Based on the analysis in §5.1, we chose this subset by including every instance with premises all of which GPT-4 believes to be true and whose conclusion's GPT-4-believed truth value matches the entailment label.
We explicitly instruct the model to use no common sense or world knowledge (§B), thereby requiring symbolic reasoning. For the CCC, we ask the model whether the unaltered or the altered premise is true, presenting both as options, and expect the latter.
While the FOLIO dataset has a public release, the authors have made subsequent updates which, at the time of this paper, have not been made public. We hence do not release the LM interaction data for this task, and use a fictional example in Table 5.
2307.02477 | 126 | # A.5 Spatial Reasoning
We ask the LM for the coordinates of objects in a room. We randomly sample 100 rooms, each with 3 different objects placed in 3 different cardinal directions, specified using unit vectors (out of north (0, -1), south (0, 1), east (1, 0), and west (-1, 0) as the default conditions). Though using a downward-facing y-axis as the default condition may be counter-intuitive, it is natural when drawing top-to-bottom and is the convention in most image processing libraries such as OpenCV (Python), Pillow (Python), and Processing (Java, JavaScript, Python), as well as graphic design applications such as Adobe Illustrator. We believe this system is the most often encountered during LM pretraining. However, other libraries with an upward-facing y-axis also exist, such as matplotlib (Python), ggplot (R), and D3 (JavaScript).
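A minimal sketch of this default convention (the room-construction specifics here are our assumptions, not the paper's code):

```python
# Default direction -> unit-vector mapping with a downward-facing y-axis,
# matching the OpenCV/Pillow screen convention described above.
DEFAULT_DIRECTIONS = {
    "north": (0, -1),
    "south": (0, 1),
    "east": (1, 0),
    "west": (-1, 0),
}

def place(center, direction, steps=1):
    """Return the coordinate `steps` unit vectors from `center` in `direction`."""
    dx, dy = DEFAULT_DIRECTIONS[direction]
    return (center[0] + steps * dx, center[1] + steps * dy)

print(place((5, 5), "north"))  # (5, 4): moving north decreases y when y points down
```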
2307.02046 | 127 | Jiliang Tang is a University Foundation Professor in the computer science and engineering department at Michigan State University since 2022. He was an associate professor (2021-2022) and an assistant professor (2016-2021) in the same department. Before that, he was a research scientist at Yahoo Research and got his PhD from Arizona State University in 2015 under Dr. Huan Liu. His research interests include graph machine learning, trustworthy AI, and their applications in education and biology. He was the recipient of various awards including 2022 AI's 10 to Watch, 2022 IAPR J. K. AGGARWAL Award, 2022 SIAM/IBM Early Career Research Award, 2021 IEEE ICDM Tao Li Award, 2021 IEEE Big Data Security Junior Research Award, 2020 ACM SIGKDD Rising Star Award, 2020 Distinguished Withrow Research Award, 2019 NSF CAREER Award, and 8 best paper awards (or runner-ups). His dissertation won the 2015 KDD Best Dissertation runner-up and Dean's Dissertation Award.
2307.02477 | 127 | For the counterfactual setting, we alter the direction→unit-vector mapping, and ask for the object coordinates in the new system. We consider two direction-swapped worlds (north-south and east-west), three rotated worlds (by 90°, 180°, and 270°), and a randomly permuted world. We evaluate the relative positions of objects and report the instance-level accuracy, which requires all 3 objects in a room to be located correctly, as the main metric. The random accuracy is around 16.7%.15 We also report the object-level accuracy in Table 24. As the CCC, we make sure that the LM understands the permuted world by asking it to also specify the coordinates of the unit vectors representing the 4 cardinal directions in the output.
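A minimal sketch of constructing such counterfactual mappings (the rotation is assumed clockwise in the y-down system; the paper's exact construction is not reproduced here):

```python
import random

DEFAULT = {"north": (0, -1), "south": (0, 1), "east": (1, 0), "west": (-1, 0)}

def rotate(v, quarter_turns):
    """Rotate a unit vector by 90° per quarter turn (clockwise when y points down)."""
    x, y = v
    for _ in range(quarter_turns % 4):
        x, y = -y, x
    return (x, y)

def counterfactual_world(kind):
    if kind == "swap-ns":                # north-south swapped world
        world = dict(DEFAULT)
        world["north"], world["south"] = world["south"], world["north"]
        return world
    if kind.startswith("rot"):           # "rot90", "rot180", "rot270"
        k = int(kind[3:]) // 90
        return {d: rotate(v, k) for d, v in DEFAULT.items()}
    if kind == "random":                 # randomly permuted world
        vectors = list(DEFAULT.values())
        random.shuffle(vectors)
        return dict(zip(DEFAULT, vectors))
    return DEFAULT

print(counterfactual_world("rot90"))  # e.g., north -> (1, 0) under a clockwise turn
```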
2307.02046 | 128 | He serves as a conference organizer (e.g., KDD, SIGIR, WSDM, and SDM) and journal editor (e.g., TKDD, TOIS, and TKDE). He has published his research in highly ranked journals and top conference proceedings, which have received tens of thousands of citations, with an h-index of 82 (Google Scholar), and extensive media coverage. More details about him can be found at https://www.cse.msu.edu/~tangjili/.
2307.02477 | 128 | # A.6 Drawing
We choose 100 objects from five Emoji16 categories: activity, travel & places, animals & nature, food & drink, and objects. Since LMs cannot generate images at the pixel level, we use code as an intermediate abstraction for sketch generation. We do our best to select objects that are easy to draw using code, verified by multiple authors. We consider the Processing language for our experiment, which supports a variety of shapes and colors and is widely used in visualization. Our initial experiments found this language to achieve the best drawing performance compared to other graphics and image processing frameworks, including TikZ, SVG, and matplotlib.
For the counterfactual settings, we ask the LMs to draw the same object, but vertically flipped, or rotated by 90° or 180°. We also ask the LMs to avoid using any transformation functions such as rotate and scale to avoid shortcuts. Before our quantitative evaluation, we flip/rotate back the generated drawing.
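A minimal sketch of this undo step, assuming the Processing sketch has been rendered to a PNG file (the rotation direction the LM was asked to apply is an assumption here):

```python
from PIL import Image

def undo_transform(path, setting):
    """Map a counterfactual drawing back to the default orientation before scoring."""
    img = Image.open(path)
    if setting == "vflip":
        return img.transpose(Image.Transpose.FLIP_TOP_BOTTOM)
    if setting == "rot90":
        return img.rotate(-90, expand=True)   # assumes the drawing was rotated counterclockwise
    if setting == "rot180":
        return img.rotate(180, expand=True)
    return img                                # default setting: no transformation
```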
2307.02046 | 129 |
Qing Li received the B.Eng. degree from Hunan University, Changsha, China, and the M.Sc. and Ph.D. degrees from the University of Southern California, Los Angeles, all in computer science. He is currently a Chair Professor (Data Science) and the Head of the Department of Computing, the Hong Kong Polytechnic University. He is a Fellow of IEEE and IET, a member of ACM SIGMOD and IEEE Technical Committee on Data Engineering. His research interests include object modeling, multimedia databases, social media, and recommender systems. He has been actively involved in the research community by serving as an associate editor and reviewer for technical journals, and as an organizer/co-organizer of numerous international conferences. He is the chairperson of the Hong Kong Web Society, and also served/is serving as an executive committee (EXCO) member of IEEE-Hong Kong Computer Chapter and ACM Hong Kong Chapter. In addition, he serves as a councilor of the Database Society of Chinese Computer Federation (CCF), a member of the Big Data Expert Committee of CCF, and is a Steering Committee member of DASFAA, ER, ICWL, UMEDIA, and WISE Society.
2307.02477 | 129 | We use human evaluation by asking human annotators to determine whether the drawing matches the object. We instruct the annotators to consider orientation as part of correctness, and for objects that have a canonical orientation, they must be drawn in that orientation. We average the results over 4 annotators. We also show a breakdown of accuracy depending on whether an object has a canonical orientation or not, as judged by the annotators, in Table 26. In addition, we consider multi-class classification accuracy using CLIP (Radford et al., 2021) as an automatic metric, where we ask CLIP to classify the drawing into our 100 categories in
15 When not considering cases where objects are placed in the same line, there are 24 permutations for placing 3 objects in 4 different directions, of which 4 can be considered correct.
16 https://getemoji.com
2307.02477 | 130 | a 0-shot fashion. We include the CLIP multi-class classification accuracy in Table 25. We note that the accuracy of the CLIP model for our setup is not guaranteed: first, our generated sketches may be distributionally different from the predominantly photorealistic images in CLIP's training data; also, CLIP might be insensitive to the object's orientation, but that distinguishes between our default and counterfactual settings. Therefore, to verify the reliability of this automatic evaluation, we randomly sample 10 objects for each model and for each default/counterfactual setting, and perform human evaluation on the 240 generated images. We find that CLIP's judgment aligns with human annotators' 84% of the time, suggesting the reliability of this evaluation.
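A minimal sketch of such zero-shot CLIP classification (the checkpoint and prompt template are assumptions; the paper does not specify them here):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

categories = ["basketball", "sailboat", "guitar"]  # illustrative subset of the 100 objects
texts = [f"a drawing of a {c}" for c in categories]

image = Image.open("generated_sketch.png")  # hypothetical rendered drawing
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape (1, num_categories)
print(categories[logits.argmax(dim=-1).item()])
```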
For this task, we do not consider PaLM-2 due to its limited context length. Our preliminary experiments also found PaLM-2 to struggle in generating parseable Processing code, even in the default setting.
We construct the CCC baseline by requiring the LMs to additionally draw a line at the top of the figure and flip/rotate it as well. A successful flipping/rotation of the line, as judged by the annotators and verified in the generated code if necessary, demonstrates an understanding of the counterfactual world.
# A.7 Music
2307.02477 | 131 | # A.7.1 Playing Chords on Instruments
We measure LMs' abilities to give correct fret placements for ukulele and guitar chords in an existing database.17,18 We include the following kinds of chords from the database: sus2 (suspended second chord), sus4 (suspended fourth chord), min triad (minor triad), maj triad (major triad), dim7 (diminished seventh chord), aug7 (augmented seventh chord), maj7 (major seventh chord), min7 (minor seventh chord), dom7 (dominant seventh chord), 5 (fifth interval), and 6 (sixth chord).
In the counterfactual setting, we instruct LMs to provide fret placements for a "special" ukulele or guitar where one of the strings is altered. We experiment with perturbations of different sizes: for guitar, we experiment with one-string changes by one note (EADGBE → EBDGBE; EADGBE →
17 https://github.com/tombatossals/chords-db
18 We heuristically filter out incorrect datapoints by filtering out chords that either have the wrong number of notes or lack the root note.
2307.02477 | 132 | FADGBE), one-string changes by two notes (→ ECDGBE), and two-string changes (→ ECFGBE). We also experiment with a one-string change that corresponds to a common alternate tuning of a guitar called drop-D tuning (→ DADGBE). For ukulele, we experiment with one-string changes by one note (GCEA → FCEA; → ACEA), one-string change by two notes (→ BCEA), and two-string changes by two notes (→ BEEA). The generated fret placements for a chord are considered correct if all and only the notes in the corresponding chord (e.g., C, E, G for a C major triad) are produced, irrespective of order.
As the CCC, we assess LMs' understanding of the given instrument's strings by asking them to identify what notes a given sequence of frets corresponds to; for the CCC, the sequences are either all fret 0, all fret 1, or all fret 2. We compute CCC accuracy at the fret level (as opposed to the sequence level).
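A minimal sketch (not the paper's code) of checking a generated fret placement against a target chord, given a tuning such as EADGBE:

```python
# Pitch classes by semitone; enharmonic spellings collapsed to sharps.
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def fret_notes(tuning, frets):
    """Map one fret number per string (None = muted) to the sounded pitch classes."""
    return {NOTES[(NOTES.index(s) + f) % 12] for s, f in zip(tuning, frets) if f is not None}

def is_correct(tuning, frets, chord_notes):
    # Correct iff all and only the chord's notes are produced, in any order.
    return fret_notes(tuning, frets) == set(chord_notes)

# C major triad (C, E, G) on a standard-tuned guitar, open C shape x-3-2-0-1-0:
print(is_correct("EADGBE", [None, 3, 2, 0, 1, 0], {"C", "E", "G"}))  # True
```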
2307.02477 | 133 | # A.7.2 Retrieving Notes of Famous Melodies
For 8 famous melodies, we prompt LMs to retrieve the n-th note in the melody, where n is between 1 and 7 (inclusive). In the counterfactual setting, we prompt the LM to do the same but in a different key. The list of melodies and keys we experiment with is below.
We use C Major as the key for songs as the default condition, given its popularity for famous melodies like children's songs. We use other keys as the counterfactual keys.19
As the CCC, we assess LMs' understanding of the given keys by asking them to retrieve the n-th note of the scale of the given key.
Melodies: Twinkle Twinkle Little Star, Mary Had a Little Lamb, Happy Birthday to You, Somewhere Over the Rainbow, Row Row Row Your Boat, Old Macdonald Had a Farm, Itsy Bitsy Spider, London Bridge is Falling Down.
Counterfactual Keys: B# major, C# major, Db major, D major, D# major, Eb major, Fb major, E major, E# major, F major, F# major, Gb major, G
2307.02477 | 134 | major, G# major, Ab major, A major, A# major, Bb major, Cb major, B major.
19 We note that some songs may have multiple canonical keys (e.g., "Twinkle Twinkle Little Star" is also frequently performed in keys like G major or D major). In some initial exploration, we validated that C Major was at least one of the canonical keys for the melodies chosen, both by verifying that popular sheet music for these songs was written in C Major, and by asking GPT-3.5 to generate the melodies in an unspecified key and verifying that the generated key was C Major.
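A minimal sketch (not the paper's code) of deriving ground-truth answers for this task: transpose a melody known in C major into the prompted key and read off the n-th note. Enharmonic spellings are collapsed to sharps for simplicity:

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
ENHARMONIC = {"Db": "C#", "Eb": "D#", "Gb": "F#", "Ab": "G#", "Bb": "A#",
              "Cb": "B", "Fb": "E", "B#": "C", "E#": "F"}

def transpose(melody_in_c, key_root):
    """Shift every note of a C-major melody up to the given key's tonic."""
    root = ENHARMONIC.get(key_root, key_root)
    shift = NOTES.index(root)  # semitones from C up to the new tonic
    return [NOTES[(NOTES.index(n) + shift) % 12] for n in melody_in_c]

twinkle_in_c = ["C", "C", "G", "G", "A", "A", "G"]  # first 7 notes in C major
print(transpose(twinkle_in_c, "D"))  # ['D', 'D', 'A', 'A', 'B', 'B', 'A']
```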
# A.8 Chess
We evaluate an LM's ability to understand chess rules by checking if it can determine whether a 4-move opening follows the rules of chess or not. In the counterfactual setting, we swap the positions of bishops and knights on the board and evaluate the same task. For each setting, we randomly sample 400 unique chess openings via a procedural generation algorithm: 200 are legal for the default setting but not for the counterfactual setting, and vice versa for the other 200, ensuring a more balanced and fair classification problem. We represent the moves as the LM input using the PGN format, the standard for describing chess moves.
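A minimal sketch of checking the legality of such an opening (assuming the python-chess library; the swapped-piece board is expressed via a custom FEN, and the paper's procedural generation is not reproduced here):

```python
import chess

DEFAULT_FEN = chess.STARTING_FEN
# Bishops and knights swapped on both back ranks.
SWAPPED_FEN = "rbnqknbr/pppppppp/8/8/8/8/PPPPPPPP/RBNQKNBR w KQkq - 0 1"

def opening_is_legal(san_moves, fen=DEFAULT_FEN):
    """Return True iff every move in the SAN sequence is legal from `fen`."""
    board = chess.Board(fen)
    try:
        for move in san_moves:
            board.push_san(move)  # raises on an illegal or ambiguous move
        return True
    except ValueError:
        return False

print(opening_is_legal(["e4", "e5", "Nf3", "Nc6"]))               # True under default rules
print(opening_is_legal(["e4", "e5", "Nf3", "Nc6"], SWAPPED_FEN))  # False: knights start elsewhere
```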
2307.02477 | 135 | For the CCC, we ask an LM for the starting positions of the four knights and four bishops on the board to make sure it understands the new initial board. For both the default and counterfactual settings, we ask for the positions of white knights, white bishops, black knights, and black bishops, totaling 8 pieces, and evaluate using accuracy. Since concluding the effectiveness of our counterfactual prompt from merely 8 CCC queries may not be statistically significant, we sample 15 LM responses using temperature=0.1 when asking about each piece.
# A.9 SET Game
We synthetically generate SET boards, consisting of 12 cards, each with exactly one 3-card SET that satisfies the game rules in §3.9. We represent each card with a string representation, e.g., (3|open|red|diamond). In preliminary experiments, we tried to ask the LMs to find the SET directly, but found that they cannot perform this task well (see Figure 4c, "Number of Cards to Find" = 3). Therefore, in our main evaluation, we expose 2 cards in the SET and ask the LM to identify the missing one that completes the SET.
2307.02477 | 136 | In the counterfactual setting, we invert the rule for the number attribute to require that two cards in the SET should have the same number but the other card should be different. For the CCC, we ask the model to verify the validity of a given SET instead of finding it. In each CCC instance, we either give a valid SET from the board, or 3 randomly sampled cards that do not constitute a valid SET. We ask the model to classify whether the given combination is valid or invalid. We note that our counterfactual perturbation ensures that each SET cannot be simultaneously valid in the default setting and the counterfactual setting, and hence this CCC is discriminative between the two settings.
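A minimal sketch (not the paper's code) of completing a SET from two exposed cards. Under the default rule, every attribute must be all-same or all-different across the three cards, so the third card is forced:

```python
NUMBERS = [1, 2, 3]
SHADINGS = ["open", "striped", "solid"]
COLORS = ["red", "green", "purple"]
SHAPES = ["diamond", "squiggle", "oval"]
ATTRIBUTES = [NUMBERS, SHADINGS, COLORS, SHAPES]

def third_value(a, b, values):
    # All-same or all-different: the remaining value is uniquely determined.
    return a if a == b else next(v for v in values if v not in (a, b))

def complete_set(card1, card2, counterfactual_number=False):
    third = [third_value(a, b, vals) for a, b, vals in zip(card1, card2, ATTRIBUTES)]
    if counterfactual_number:
        # Counterfactual rule: two cards share the number, the third differs.
        # The completion is then not unique in the number attribute; this
        # simplified sketch returns one valid choice.
        assert card1[0] == card2[0], "this sketch assumes the exposed cards share a number"
        third[0] = next(n for n in NUMBERS if n != card1[0])
    return tuple(third)

print(complete_set((3, "open", "red", "diamond"), (1, "open", "green", "diamond")))
# -> (2, 'open', 'purple', 'diamond') under the default rules
```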
# B Prompts
We provide the exact prompts that we used to query the LMs in Tables 1 to 17. For clarity, we give a concrete prompt that embeds a test instance, rather than a template. We explain minor design decisions in the respective captions. We do not use the system message field for any model.
# C Raw Results
We show the numeric results in Tables 18 to 34.
# Test
You are a mathematician. Assuming that all numbers are in base-11 where the digits are "0123456789A", what is 59+37? {Let's think step by step, and }end the response with the result in "\boxed{result}".
2307.02477 | 137 | # CCC
You are a mathematician. Assuming that all numbers are in base-11 where the digits are "0123456789A", what is the next number after 11A? Do this by counting the few preceding numbers and completing the sequence. End the response with the result.
# Few-Shot CoT
You are a mathematician. Assuming that all numbers are in base-11 where the digits are "0123456789A", what is 25+68? Let's think step by step, and end the response with the result in "\boxed{result}". We add the ones digits first. In base-11, 5+8=12. So the ones digit of the final sum is 2. We need to carry over the 1 to the tens place. Then we add the tens digits. In base-11, 2+6=8. Since we carried over the 1, 8+1=9. So the tens digit of the final sum is 9. Putting the digits of the final sum together, we get \boxed{92}. ...[optionally more demonstrations in the same format]... You are a mathematician. Assuming that all numbers are in base-11 where the digits are "0123456789A", what is 59+37? Let's think step by step, and end the response with the result in "\boxed{result}".
2307.02477 | 138 | Table 1: Prompts for the arithmetic task. {Let's think step by step, and } is added only if 0-shot CoT is used (and the following "e" is capitalized without 0-shot CoT). We use the \boxed{result} syntax to wrap results because we found in preliminary experiments that the models tend to use this format even without this specification. The Few-Shot CoT prompt is used for the analysis in §5.5.
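As a sanity check on the arithmetic prompts above, a small helper (ours, not the paper's) for computing ground-truth base-11 sums:

```python
DIGITS = "0123456789A"

def add_base11(a: str, b: str) -> str:
    """Add two base-11 numerals, e.g., add_base11('59', '37') == '95'."""
    to_int = lambda s: sum(DIGITS.index(c) * 11**i for i, c in enumerate(reversed(s)))
    n = to_int(a) + to_int(b)
    out = ""
    while n:
        n, r = divmod(n, 11)
        out = DIGITS[r] + out
    return out or "0"

print(add_base11("59", "37"))  # '95': 9+7 = 15 in base 11 (write 5, carry 1), then 5+3+1 = 9
```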
2307.02477 | 139 | # Default
You are an expert programmer. What does the following code snippet in Python 3.7 print?
```python
def function(lst):
    return sum([lst[i] for i in range(1, len(lst), 2) if lst[i] % 2 == 0])

print([function([4, 88])])
print([function([4, 5, 6, 7, 2, 122])])
print([function([4, 0, 6, 7])])
print([function([4, 4, 6, 8])])
print([list(range(3))])
print([[4, 5, 6].pop(2)])
print(["qrs"[:2]])
print(["qrstu"[4]])
print([list(enumerate("qrstuv"))])
```
{Let's think step by step. Write out intermediate results and reasoning processes as needed. }End the response by saying "The final output is:" and a unified summary ```python``` code block with *ALL* the output, in which each line represents the output of each print statement.
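For reference, the ground truth for this default prompt, obtained by executing the snippet (our addition, not part of the prompt):

```python
[88]
[122]
[0]
[12]
[[0, 1, 2]]
[6]
['qr']
['u']
[[(0, 'q'), (1, 'r'), (2, 's'), (3, 't'), (4, 'u'), (5, 'v')]]
```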