doi (string, length 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k) | id (string, 12–14) | title (string, 8–162) | summary (string, 228–1.92k) | source (string, 31) | authors (string, 7–6.97k) | categories (string, 5–107) | comment (string, 4–398, ⌀) | journal_ref (string, 8–194, ⌀) | primary_category (string, 5–17) | published (string, 8) | updated (string, 8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2307.02046 | 45 | To improve the performance of LLMs for RecSys, prompting techniques such as In-Context Learning (ICL) and Chain-of-Thought (CoT) are increasingly investigated as ways to manually design prompts for various recommendation tasks. In addition, prompt tuning serves as an additive prompting technique, adding prompt tokens to LLMs and then updating them on task-specific recommendation datasets. More recently, instruction tuning, which combines the pre-training & fine-tuning paradigm with prompting [87], has been explored to fine-tune LLMs over multiple recommendation tasks with instruction-based prompts, enhancing the zero-shot performance of LLMs on unseen recommendation tasks. Figure 5 compares representative methods for each of these three prompting techniques in terms of input formation and parameter updates (i.e., whether the LLM is tunable or frozen). In this section, we discuss prompting, prompt tuning, and instruction tuning in detail for improving the performance of LLMs on recommendation tasks. Table 3 categorizes the existing works according to these three techniques, including the specific recommendation tasks and the LLM backbones considered in those works.
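The input-formation contrast among the three techniques can be sketched in Python. This is an illustrative assumption only: the template wording, the movie-recommendation task, and the `[SOFT_i]` placeholder notation are invented for this sketch and are not drawn from the surveyed methods.

```python
# Sketch of input formation for the three techniques (frozen vs. tunable LLM).
# All prompt wording is hypothetical; soft tokens are shown as string
# placeholders, whereas in practice they are trainable embeddings.

def manual_prompt(history):
    # Prompting: a hand-crafted, task-specific prompt; the LLM stays frozen.
    return ("A user watched: " + ", ".join(history) +
            ". Recommend the next movie.")

def prompt_tuning_input(history, n_soft_tokens=10):
    # Prompt tuning: learnable prompt tokens prepended to the input;
    # only those token embeddings are updated on recommendation data.
    soft = " ".join(f"[SOFT_{i}]" for i in range(n_soft_tokens))
    return soft + " " + manual_prompt(history)

def instruction_prompt(history):
    # Instruction tuning: instruction-based prompts; the LLM itself is
    # fine-tuned over many such tasks to improve zero-shot transfer.
    return ("Instruction: Given a user's watch history, recommend one movie.\n"
            "Input: " + ", ".join(history) + "\nOutput:")
```

For example, `prompt_tuning_input(["Inception", "Interstellar"], 3)` yields the same text as `manual_prompt` but prefixed by three soft-token placeholders, mirroring the "input formation" axis of Figure 5.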
# 5.1 Prompting | 2307.02046#45 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 45 | Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, Vinay Ramasesh, Vinay Uday Prabhu, Vishakh Padmakumar, Vivek Srikumar, William Fedus, William Saunders, William Zhang, Wout Vossen, Xiang Ren, Xiaoyu Tong, Xinran Zhao, Xinyi Wu, Xudong Shen, Yadollah Yaghoobzadeh, Yair Lakretz, Yangqiu Song, Yasaman Bahri, Yejin Choi, Yichi Yang, Yiding Hao, Yifu Chen, Yonatan Belinkov, Yu Hou, Yufang Hou, Yuntao Bai, Zachary Seid, Zhuoye Zhao, Zijian Wang, Zijie J. Wang, Zirui Wang, and Ziyi Wu. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models, 2022. | 2307.02053#45 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 45 | # 0-Shot Chain-of-Thought Prompting
Consistent with prior findings (Chen et al., 2022; Dasgupta et al., 2022; i.a.), we generally observe 0-shot CoT to be helpful in most cases. There are, however, exceptions. For example, 0-shot CoT substantially hurts PaLM-2's addition performance in bases 10 and 16, and consistently degrades GPT-4's and GPT-3.5's chord-playing performance for the default tuning. This may be because a model pragmatically infers that a task is more difficult than it actually is when explicitly asked to "think step by step", and this "overthinking" on simple tasks could lead to mistakes (Kojima et al., 2023). It is also possible that these errors stem from memorization: the model could have memorized the specific input-output mapping of a task without understanding how to derive the output from the input, and when explicitly instructed to spell out that process, it makes more errors (Zhang et al., 2023b).
# 5.5 Few-shot Demonstrations | 2307.02477#45 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
[42] M. Shridhar, J. Thomason, D. Gordon, Y. Bisk, W. Han, R. Mottaghi, L. Zettlemoyer, and D. Fox. Alfred: A benchmark for interpreting grounded instructions for everyday tasks. In Proceedings of the IEEE/CVF conference on computer vision and pattern recognition, pages 10740–10749, 2020.
[43] T. Shu and Y. Tian. M3rl: Mind-aware multi-agent management reinforcement learning. arXiv
preprint arXiv:1810.00147, 2018.
[44] C. H. Song, J. Wu, C. Washington, B. M. Sadler, W.-L. Chao, and Y. Su. Llm-planner: Few- shot grounded planning for embodied agents with large language models. arXiv preprint arXiv:2212.04088, 2022.
[45] J. Suarez, Y. Du, P. Isola, and I. Mordatch. Neural mmo: A massively multiagent game environment for training and evaluating intelligent agents. arXiv preprint arXiv:1903.00784, 2019.
[46] Y. Wang, J. Xu, Y. Wang, et al. Tom2c: Target-oriented multi-agent communication and cooperation with theory of mind. In International Conference on Learning Representations, 2021. | 2307.02485#45 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 46 | # 5.1 Prompting
The key idea of prompting is to keep LLMs frozen (i.e., no parameter updates) and adapt them to downstream tasks via task-specific prompts. To recap the development of prompting strategies for adapting LLMs to downstream tasks: early conventional prompting methods mainly aim to unify downstream tasks into language generation formats, such as text summarization, relation extraction, and sentiment analysis. Later, ICL [15] emerged as a powerful prompting strategy that allows LLMs to learn new tasks (i.e., tasks with knowledge-demanding objectives) from contextual information. In addition, another recent prompting strategy, CoT [50], is particularly effective for prompting LLMs to address downstream tasks that require complex reasoning.
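The two strategies named above can be made concrete with a small Python sketch; the prompt wording and the toy rating-prediction task are illustrative assumptions, not taken from the surveyed works.

```python
# Hypothetical demonstration pairs for a toy "will the user like this
# candidate item?" task, used to build an ICL prompt for a frozen LLM.
DEMOS = [
    ("User liked: sci-fi A, sci-fi B. Candidate: sci-fi C.", "yes"),
    ("User liked: romance A, romance B. Candidate: horror C.", "no"),
]

def icl_prompt(query):
    # In-context learning: prepend input-output demonstrations so the
    # frozen LLM can infer the task from context alone.
    lines = [f"Input: {x}\nOutput: {y}" for x, y in DEMOS]
    return "\n\n".join(lines) + f"\n\nInput: {query}\nOutput:"

def cot_prompt(query):
    # Chain-of-thought: elicit intermediate reasoning before the answer,
    # targeting tasks that require multi-step inference.
    return (f"Input: {query}\n"
            "Let's think step by step about the user's preferences, "
            "then answer yes or no.")
```

Both functions return plain strings that would be sent to an LLM without any parameter update, which is the defining property of prompting discussed above.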
# 5.1.1 Conventional Prompting | 2307.02046#46 |
2307.02477 | 46 | # 5.5 Few-shot Demonstrations
We study whether additional demonstration examples via in-context learning (Brown et al., 2020) bridge the default-counterfactual gap. For the arithmetic task, we construct few-shot CoT prompts (Nye et al., 2021; Wei et al., 2022) and prepend up to 16 samples. As shown in Figure 6, while the gap is reduced, it remains substantial for bases 9, 11, and 16.
Moreover, the accuracy improvement from more demonstrations plateaus towards 16-shot, suggesting that the default-counterfactual gap is unlikely to be eliminated by simply adding more demonstrations (at least for arithmetic).
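The construction described above can be sketched as follows. This is an assumed illustration: the helper names (`to_base`, `cot_demo`) and the exact prompt wording are not the authors' prompts, only one plausible way to assemble base-b addition demonstrations with spelled-out reasoning.

```python
# Build few-shot chain-of-thought demonstrations for addition in base b.

def to_base(n, b):
    # Convert a non-negative integer to its base-b digit string (b <= 16).
    if n == 0:
        return "0"
    digits = []
    while n:
        digits.append("0123456789ABCDEF"[n % b])
        n //= b
    return "".join(reversed(digits))

def cot_demo(x, y, b):
    # One demonstration: operands and answer written in base b, with the
    # reasoning spelled out via the base-10 intermediate sum.
    s = x + y
    return (f"In base-{b}: {to_base(x, b)} + {to_base(y, b)} = ?\n"
            f"Step: {x} + {y} = {s} in base 10, which is "
            f"{to_base(s, b)} in base {b}.\n"
            f"Answer: {to_base(s, b)}")

def few_shot_cot_prompt(pairs, query, b):
    # Prepend k demonstrations (e.g., up to 16) before the test query.
    demos = "\n\n".join(cot_demo(x, y, b) for x, y in pairs)
    qx, qy = query
    return demos + f"\n\nIn base-{b}: {to_base(qx, b)} + {to_base(qy, b)} = ?"
```

Varying the number of `pairs` passed to `few_shot_cot_prompt` corresponds to the 1-shot through 16-shot conditions compared in Figure 6.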
# 5.6 Qualitative Analysis of Drawing Results | 2307.02477#46 |
2307.02485 | 46 | [47] Z. Wang, S. Cai, A. Liu, X. Ma, and Y. Liang. Describe, explain, plan and select: Interactive planning with large language models enables open-world multi-task agents, 2023.
[48] J. Wei, X. Wang, D. Schuurmans, M. Bosma, E. Chi, Q. Le, and D. Zhou. Chain of thought prompting elicits reasoning in large language models. arXiv preprint arXiv:2201.11903, 2022.
[49] Y. Wu, S. Y. Min, Y. Bisk, R. Salakhutdinov, A. Azaria, Y. Li, T. Mitchell, and S. Prabhumoye. Plan, eliminate, and track – language models are good teachers for embodied agents. arXiv preprint arXiv:2305.02412, 2023.
[50] F. Xia, A. R. Zamir, Z. He, A. Sax, J. Malik, and S. Savarese. Gibson env: Real-world perception for embodied agents. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 9068–9079, 2018. | 2307.02485#46 |
2307.02046 | 47 | # 5.1.1 Conventional Prompting
There are two major approaches to prompting pre-trained language models for better performance on specific downstream tasks. One is prompt engineering, which crafts prompts that emulate text the language model encountered during pre-training (e.g., text in NLP tasks). This allows pre-trained language models to unify downstream tasks with unseen objectives into language generation tasks with known objectives. For instance, Liu et al. [39] prompt ChatGPT to cast the review summary task in recommendation as text summarization, with a prompt including "Write a short sentence to summarize .". The other is few-shot prompting, where a few input-output examples (i.e., shots) are provided to guide pre-trained language models to generate the desired output for a specific downstream task.
Due to the large gap between language generation tasks (i.e., the pre-training objectives of LLMs) and downstream recommendation tasks, these conventional prompting methods have shown only limited application, restricted to recommendation tasks whose nature is similar to language generation, such as summarizing user reviews [39] and labeling relations between items [4].
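The two conventional approaches above can be sketched side by side; the templates are assumptions for illustration, loosely patterned on (but not copied from) the Liu et al. example quoted in the text.

```python
# Sketch of conventional prompting: prompt engineering vs. few-shot shots.

def summary_prompt(review):
    # Prompt engineering: cast review summarization as a text-generation
    # task resembling text seen during pre-training.
    return f'Write a short sentence to summarize the review: "{review}"'

def few_shot_prompt(examples, new_input):
    # Few-shot prompting: a handful of input-output shots guide the
    # frozen model toward the desired output format.
    shots = "\n".join(f"Review: {r}\nSummary: {s}" for r, s in examples)
    return f"{shots}\nReview: {new_input}\nSummary:"
```

Both remain pure text manipulation, which is why they transfer well only to recommendation tasks that already resemble language generation.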
# 5.1.2 In-Context Learning (ICL) | 2307.02046#47 |
2307.02477 | 47 | # 5.6 Qualitative Analysis of Drawing Results
We conduct a qualitative error analysis on the drawing task and show some examples in Figure 7. We first note that GPT-4 successfully passes the CCC for these cases (see §3.6; not displayed here), indicating that it understands the flip/rotation instructions. However, the objects in the counterfactual worlds are often not flipped or rotated. Even when they are transformed appropriately, the resulting drawing is often simplified or of worse quality (e.g., Unicorn, Cake). We also observed many more syntactically invalid programs in the counterfactual cases for GPT-3.5. These results indicate that even when a model can perform a task in the counterfactual setup, its capabilities are reduced.
# 6 Discussion | 2307.02477#47 |
2307.02485 | 47 | [51] F. Xiang, Y. Qin, K. Mo, Y. Xia, H. Zhu, F. Liu, M. Liu, H. Jiang, Y. Yuan, H. Wang, et al. Sapien: A simulated part-based interactive environment. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 11097–11107, 2020.
[52] S. Yang, O. Nachum, Y. Du, J. Wei, P. Abbeel, and D. Schuurmans. Foundation models for decision making: Problems, methods, and opportunities. arXiv preprint arXiv:2303.04129, 2023.
[53] S. Yuan, J. Chen, Z. Fu, X. Ge, S. Shah, C. R. Jankowski, D. Yang, and Y. Xiao. Distilling script knowledge from large language models for constrained language planning. arXiv preprint arXiv:2305.05252, 2023.
[54] Y. Zhu, D. Gordon, E. Kolve, D. Fox, L. Fei-Fei, A. Gupta, R. Mottaghi, and A. Farhadi. Visual semantic planning using deep successor representations. In Proceedings of the IEEE International Conference on Computer Vision, pages 483–492, 2017.
# A Additional Details on Framework | 2307.02485#47 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 48 | # 5.1.2 In-Context Learning (ICL)
Alongside the introduction of GPT-3 [15], ICL was proposed as an advanced prompting strategy that significantly boosts the performance of LLMs in adapting to many downstream tasks. Gao et al. [86] attribute the success of ICL in prompting LLMs for downstream tasks to two designs: the prompt and the in-context demonstrations. In other words, the key innovation of ICL is to elicit the in-context ability of LLMs to learn (new or unseen) downstream tasks from context at inference time. In particular, two ICL settings are widely used for prompting LLMs for RecSys. One is the few-shot setting, in which a few demonstrations with contexts and desired completions of the specific downstream task are provided along with the prompt. The other is the zero-shot setting, where no demonstrations are given to the LLM and only a natural language description of the specific downstream task is appended to the prompt. As shown in Figure 6, two brief templates of few-shot ICL and zero-shot ICL for recommendation tasks are provided, respectively.
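The two ICL settings described above differ only in whether input-output demonstrations precede the query. A minimal sketch for a rating-prediction task (the prompt wording is illustrative, not taken from any surveyed paper):

```python
# Sketch of zero-shot vs. few-shot ICL prompts for rating prediction;
# the task description and field names are illustrative assumptions.

TASK = "Predict the user's rating (1-5) for the candidate item."

def zero_shot_prompt(history, candidate):
    # Zero-shot ICL: natural language task description only, no demonstrations.
    return f"{TASK}\nUser history: {history}\nCandidate: {candidate}\nRating:"

def few_shot_prompt(demos, history, candidate):
    # Few-shot ICL: prepend input-output demonstrations before the query.
    lines = [TASK]
    for d_hist, d_cand, d_rating in demos:
        lines.append(f"User history: {d_hist}\nCandidate: {d_cand}\nRating: {d_rating}")
    lines.append(f"User history: {history}\nCandidate: {candidate}\nRating:")
    return "\n\n".join(lines)

demos = [("The Matrix, Inception", "Interstellar", 5)]
print(few_shot_prompt(demos, "Titanic, The Notebook", "La La Land"))
```

Both prompts end with an open completion slot ("Rating:"), so the LLM fills in the prediction without any parameter update.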
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, SUBMISSION 2023 | 2307.02046#48 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 48 | Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernan- dez, Jackson Kernion, Kamal Ndousse, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, and Jared Kaplan. A general language assistant as a laboratory for alignment, 2021.
Lianmin Zheng, Wei-Lin Chiang, Ying Sheng, Siyuan Zhuang, Zhanghao Wu, Yonghao Zhuang, Zi Lin, Zhuohan Li, Dacheng Li, Eric. P Xing, Hao Zhang, Joseph E. Gonzalez, and Ion Stoica. Judging llm-as-a-judge with mt-bench and chatbot arena, 2023.
Kai Lv, Yuqing Yang, Tengxiao Liu, Qinghui Gao, Qipeng Guo, and Xipeng Qiu. Full parameter fine-tuning for large language models with limited resources, 2023.
10 | 2307.02053#48 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 48 | # 6 Discussion
Do humans also perform worse with unfamiliar counterfactual conditions? It is possible that humans may have lower performance under the counterfactual conditions with a fixed time budget, but not necessarily when given ample time to reason and revise. Analogous to the classic competence/performance distinction in linguistics (Chomsky, 1965, §1.1), we hypothesize that humans have the competence to generalize to new task conditions, even though it may sometimes require sufficient execution budget to realize it as robust performance.12 In fact, there is increasing evidence from cognitive science that human reasoning is scaffolded by rich causal models of the world (Pearl, 1988; Lake et al., 2017; Ullman and Tenenbaum, 2020; Wong et al., 2023), and that humans can intervene on these models to perform rapid and flexible counterfactual simulations (Lagnado et al., 2013; Gerstenberg et al., 2017, 2021). However, stepping
10An interesting pattern is that bases 11 and 16 suffer more from the 1-shot demonstration than from 0-shot. We hypothesize that this may be due to these being the two bases with letter digits. | 2307.02477#48 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 48 | # A Additional Details on Framework
# A.1 Observation Module
To deal with complex observations, a well-constructed observation module is needed for embodied agents to extract useful information.
For ego-centric RGBD image observations, the observation module first maps the RGBD images into 3D point clouds. From the 3D point clouds, we can extract the positions and relationships of the objects with the help of instance segmentation masks. After that, the extracted environment information is sent to the belief module.
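The RGBD-to-point-cloud step above is standard pinhole back-projection. A minimal sketch, where the intrinsics (fx, fy, cx, cy) are illustrative values, not the actual camera parameters of the simulated environments:

```python
# Sketch of pinhole back-projection from a depth image to a 3D point cloud;
# the intrinsics used below are illustrative, not the environments' values.

def depth_to_points(depth, fx, fy, cx, cy):
    # depth: rows of metric depth values, indexed as depth[v][u].
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0:          # skip invalid or missing depth readings
                continue
            x = (u - cx) * z / fx
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

depth = [[2.0, 2.0],
         [0.0, 4.0]]           # one invalid pixel
pts = depth_to_points(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
# pixel (u=0, v=0) with z=2: x = (0 - 0.5) * 2 = -1.0, y = -1.0
```

Per-pixel segmentation masks can then be carried along with the (x, y, z) tuples to attach object identities to the resulting points.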
# A.2 Belief Module
The belief module helps the agent remember the locations of seen objects, the progress of the current tasks, etc., and generates the map used for navigation. In Figure 7, we show an example navigation map used in the TDW-MAT environment for navigation and exploration.
# A.3 Communication Module
It's important for cooperative embodied agents to be able to communicate effectively with others. Effective communication needs to solve two problems: what to send and when to send.
We deal with the what to send problem in this module by directly using the LLMs as a Message Generator with designed prompts, constructed from the components of Instruction Head, Goal Description, States Description, Action History, and Dialogue History. To better constrain the LLMs' generated messages, we also add a note at the end of the prompt and append two seed messages at the beginning of the Dialogue History to elicit the desired effective communication behavior. The detailed prompt design is shown below: | 2307.02485#48 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 49 |
Figure 5: An illustration of three representative methods of prompting LLMs: in-context learning (top), which requires no parameter update of LLMs; prompt tuning (middle), which adds new prompt tokens to LLMs and optimizes the prompt along with minimal parameter updates at the input layer of LLMs; and instruction tuning (bottom), which fine-tunes LLMs over multiple task-specific prompts, also known as instructions.
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 49 | 10An interesting pattern is that bases 11 and 16 suffer more from the 1-shot demonstration than from 0-shot. We hypothesize that this may be due to these being the two bases with letter digits.
11On average, the number of parseable programs generated by GPT-3.5 drops from 99% in the default condition to 62%, 71%, and 75% for the vertically flipped, 90° rotated, and 180° rotated settings, respectively.
12It is arguable if our evaluation setting provides sufficient execution budget (Lampinen, 2023). Our in-context learning experiment (§5.5) may be thought of as increasing this budget, and yet the default-counterfactual gap is still sizeable there.
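Footnote 10 above singles out bases 11 and 16 as the only evaluated bases that require letter digits. A small helper (ours, not the paper's code) that renders integers in an arbitrary base makes this concrete:

```python
# Sketch: render a non-negative integer in base b. Bases above 10 need
# letter digits (e.g., 'a' = 10), which is what sets bases 11 and 16 apart.

DIGITS = "0123456789abcdef"

def to_base(n, b):
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, b)    # peel off the least significant digit
        out.append(DIGITS[r])
    return "".join(reversed(out))

assert to_base(27, 16) == "1b"   # base 16 uses a letter digit here
assert to_base(21, 11) == "1a"   # base 11 also needs a letter digit
assert to_base(27, 9) == "30"    # base 9 stays within numeric digits
```

Gold answers for counterfactual base arithmetic can be generated this way, so any default-counterfactual gap is attributable to the model rather than to the evaluation.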
Figure 7: Visualizations of objects drawn by GPT-4 under the default (upright) and counterfactual conditions: vertical flip (Vflip), 90-degree rotation (R90), and 180-degree rotation (R180). In all cases, the CCC (not shown) passes. We show the original output, without flipping/rotating back as in our quantitative evaluation (§A.6). For the counterfactual settings, GPT-4 either does not transform the objects as instructed (e.g., house and penguin) or struggles to draw meaningful objects (e.g., cake and unicorn). | 2307.02477#49 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 50 | ⢠Prompting LLMs for RecSys via Few-shot ICL. A straightforward approach for prompting LLMs to down- stream recommendation tasks is to teach LLMs how to act as RecSys. For instance, Liu et al. [39] employ ChatGPT and propose separate task descriptions tailored to different recommendation tasks, including top-K recommendation, rating prediction, and explanation gen- eration, to perform few-shot ICL based on corresponding input-output examples of each recommendation task. For instance, the user rating history is given as an example for rating prediction tasks. Similarly, other existing works propose their distinct insights into designing the in-context demonstrations for better recommendation performance. For example, a text description of role injection, such as âYou are a book rating expert.â, is proposed in [58] to augment the in- context demonstrations, which prevents LLMs from refusing to complete the recommendation tasks (e.g., LLMs sometimes respond with âAs a language model, I donât have the ability to recommend ...â for recommendation tasks). Apart from teaching LLMs to directly act as RecSys, few-shot ICL is also leveraged | 2307.02046#50 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 50 | back, replicating or modeling human intelligence need not be a main goal of LMs in the first place, and human behavior is largely orthogonal to the desiderata we set for these models.
Is task-specific reasoning bad? It is not necessarily bad when solving familiar tasks, but an ideal system should also possess general reasoning abilities that, when prompted, can be used to generalize to novel situations. Our point is that memorization is an often-overlooked confounding factor in interpreting LMs' reasoning abilities.
Why do we care about counterfactual worlds? Wouldn't a model for only the default task instantiation be nonetheless useful? It is certainly true that such a model would still be useful. However, many of the counterfactual worlds that we investigate actually are not very distant, so model performance under them still bears utility. For example, addition in different bases is certainly useful for many applications. More generally, we are not necessarily interested in the counterfactual tasks themselves; we are only interested in them insofar as performance on these tasks can serve as a measurable proxy for the generalizability of these models and their underlying reasoning capabilities. | 2307.02477#50 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 50 | Figure 7: A visualization of the navigation map in the TDW-MAT environment. The goal location is shown in red, target objects are in blue, containers are in green, the agent is denoted with cyan, and the other agent's position in memory is denoted in yellow.
Goal Description For each task, the goal description is converted from T = {G1, G2, ..., Gm} using a formal template.
State Description For each step, the state description is converted from task progress PT, ego-state PE, others-state PO, and scene memory PS given by the belief module.
Action History The concatenation of the last K actions (high-level plans) the agent took.
Dialogue History The concatenation of the last D dialogues between agents, including the messages the agent itself has sent.
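Concretely, the four components above can be concatenated into one prompt string. A minimal sketch, assuming simple textual fields; the function name, field wording, and template layout are illustrative, not the paper's exact format:

```python
# Illustrative sketch only: assembles a prompt from the four components
# described above (goal, state, last-K actions, last-D dialogues).
def build_prompt(goal_desc, state_desc, action_history, dialogue_history,
                 k=5, d=3):
    """Concatenate the goal/state descriptions with the most recent
    K actions and D dialogue messages into one prompt string."""
    parts = [
        "Goal: " + goal_desc,
        "State: " + state_desc,
        "Action history: " + "; ".join(action_history[-k:]),
        "Dialogue history:",
    ]
    parts.extend(dialogue_history[-d:])
    # Note appended to constrain message generation, as described below.
    parts.append("Note: The generated message should be accurate, helpful, "
                 "and brief. Do not generate repetitive messages.")
    return "\n".join(parts)

prompt = build_prompt(
    goal_desc="Transport 3 target objects to the bed.",
    state_desc="I am holding nothing; my teammate was last seen in the kitchen.",
    action_history=["goexplore", "gograsp <apple>"],
    dialogue_history=['Alice: "Hi, I\'ll let you know if I find any goal objects."'],
)
```

Truncating to the last K actions and D dialogues keeps the prompt within the LLM's context window while preserving the most recent evidence.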
To constrain the message generation of the LLMs, we add a note at the end of the prompt:
Note: The generated message should be accurate, helpful, and brief. Do not generate repetitive messages.
And append two seed messages at the beginning of the Dialogue History to elicit the desired effective communication behavior:
Alice: "Hi, I'll let you know if I find any goal objects, finish any subgoals, and ask for your help when necessary." | 2307.02485#50 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 51 | have the ability to recommend ..." for recommendation tasks). Apart from teaching LLMs to directly act as RecSys, few-shot ICL is also leveraged to guide LLMs to call traditional RecSys or external domain tools for recommendations. For example, a framework named Chat-Rec [3] is proposed to bridge ChatGPT and traditional RecSys via few-shot ICL, where ChatGPT learns to receive candidate items from traditional RecSys and then refines the final recommendation results. What's more, Zhang [101] designs a textual API call template for external graph reasoning tools and successfully teaches ChatGPT to use those templates through few-shot ICL to access the graph-based recommendation results generated by the external tools. | 2307.02046#51 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
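The tool-calling pattern described above, in which the LLM emits a textual API call that is routed to an external recommender, can be sketched as follows. The call format, tool name, and parameters here are invented for illustration and are not the actual template of [101]:

```python
import re

# Hypothetical call format the LLM is taught (e.g., via few-shot ICL) to emit.
CALL_PATTERN = re.compile(r"GraphTool\.recommend\(user=(\d+),\s*top_k=(\d+)\)")

def dispatch(llm_output, graph_tool):
    """Route a textual API call in the LLM output to the external tool;
    return None if the output contains no call."""
    m = CALL_PATTERN.search(llm_output)
    if m is None:
        return None
    return graph_tool(int(m.group(1)), int(m.group(2)))

# Stand-in for an external graph-based recommender.
fake_tool = lambda user_id, k: [f"item_{i}" for i in range(k)]
results = dispatch("I will query the tool: GraphTool.recommend(user=42, top_k=3)",
                   fake_tool)
```

The key design choice is that the LLM never computes the recommendation itself; it only learns, from in-context demonstrations, when and how to emit the call that the surrounding system parses and executes.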
2307.02477 | 51 | Aren't the observed trends trivial? The default task variant is likely the most frequent during pretraining, so of course an LM performs better under it. Indeed, our results parallel the classic train-test gap in machine learning. However, an ideal learner with the right inductive biases should be able to structure their internal parameters and representations to implement general-purpose abstractions (e.g., the concept of addition), and use these abstractions to generalize to counterfactual conditions, analogous to physicists using mathematical abstractions to make predictions about universes that are substantially different from our own, or more generally to humans who can generalize to new stimuli in cognitive science studies (Lagnado et al., 2013; Gerstenberg et al., 2017, 2021). Our study indicates that LMs trained on large text corpora, remarkable as they may be, are still quite susceptible to overfitting with frequency effects. | 2307.02477#51 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 51 | Bob: "Thanks! I'll let you know if I find any goal objects, finish any subgoals, and ask for your help when necessary."
# A.4 Reasoning Module
With all the information gathered and provided by previous modules, cooperative embodied agents need to synthesize and reason over the current state, the belief of the others and the scene, the goals, the actions I've taken, and messages I've received to come up with a plan of what to do next. A strong reasoning module is required to leverage all the information effectively.
While designing such a module from scratch is nearly infeasible, we utilize powerful LLMs directly as the Reasoning Module with designed prompts similar to the Communication Module to reason over all the information and generate a high-level plan.
We utilize the same LLMs with slightly modified prompts to serve as the Reasoning Module to decide on a high-level plan given all the information gathered from the previous modules, leveraging its strong complex reasoning capability as observed in [6]. Specifically, we modify the Instruction Head and
compile an Action List of all available actions for the LLMs to make the choice, a formalization that makes it easier for the LLM to make an executable plan without any few-shot demonstrations. | 2307.02485#51 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 52 | • Prompting LLMs for RecSys via Zero-shot ICL. Many existing works consider both few-shot ICL and zero-shot ICL settings at the same time to compare their performance under the same recommendation tasks. Typically, few-shot ICL can outperform zero-shot ICL
Few-shot ICL (with demonstrations)
Prompt template: a task description ("Your task is to recommend a new item based on __."), the user-item interactions ("[1] <item title 1> [2] <item title 2> [3] <item title 3>"), and the output slot ("Recommend item:"); the few-shot variant additionally inserts demonstrations ("Below are some examples. User-item interactions: [1] <item title a> [2] <item title b> Recommend item: <item title c>").
Zero-shot ICL (without demonstration)
Figure 6: Brief templates of few-shot ICL and zero-shot ICL for recommendation tasks. | 2307.02046#52 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 52 | Can some more carefully designed prompts eliminate the default-counterfactual gap? This is always a possibility, and one that we can never tractably rule out. Nevertheless, given the consistent patterns across our tasks, we believe that a prompt that completely bridges the default-counterfactual gap is unlikely. Our in-context learning experiment (§5.5) shows that this gap could be reduced, but not fully removed, by more informative prompts. It would be interesting to apply more advanced prompting techniques (Wang et al., 2023a, 2022a; Yao et al., 2023; Sordoni et al., 2023; i.a.) to our counterfactual tasks. We considered 0-shot chain-of-thought in this work, which did not fully bridge the default-counterfactual gap, but we leave the exploration of these more recent prompting techniques to future work.
# 7 Limitations
Despite our attempt to devise novel counterfactual conditions to gauge an LM's "true" reasoning ability, it may not be precisely reflected by the counterfactual performance due to several factors.
# 7.1 Underestimation | 2307.02477#52 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 52 | compile an Action List of all available actions for the LLMs to make the choice, a formalization that makes it easier for the LLM to make an executable plan without any few-shot demonstrations.
We also use the zero-shot chain-of-thought prompting technique introduced by [22] to encourage the LLM to carry out more reasoning before giving the final answer.
Action List We compile all available actions into an Action List for the LLMs to select from. The multi-choice formalization makes it easier for the LLM to make an executable action without any few-shot demonstrations.
Answer Extraction As shown in [48], chain-of-thought prompting can unleash the strong reasoning ability of LLMs, so we use the zero-shot chain-of-thought prompting technique introduced by [22] to encourage the LLM to carry out more reasoning before giving the final answer.
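A minimal sketch of this multiple-choice formalization combined with the zero-shot chain-of-thought trigger of [22] ("Let's think step by step."). The exact prompt wording and the "Answer: X" output format assumed here are illustrative, not the paper's verbatim template:

```python
def reasoning_prompt(state_desc, action_list):
    """Present the available actions as lettered options and append the
    zero-shot chain-of-thought trigger."""
    options = "\n".join(f"{chr(ord('A') + i)}. {a}"
                        for i, a in enumerate(action_list))
    return (f"{state_desc}\nAvailable actions:\n{options}\n"
            "Which action should I take next? Let's think step by step.")

def extract_action(llm_output, action_list):
    """Map the model's chosen option (assumed to contain 'Answer: X')
    back to an executable action; returns None if no option is found."""
    for i, action in enumerate(action_list):
        letter = chr(ord('A') + i)
        if f"Answer: {letter}" in llm_output:
            return action
    return None

actions = ["goexplore", "gograsp <target object>", "send a message"]
prompt = reasoning_prompt("I am in the living room holding nothing.", actions)
chosen = extract_action("...so I should grasp it. Answer: B", actions)
```

Restricting the output space to an enumerated list is what makes the plan directly executable without few-shot demonstrations: the extractor only needs to match an option letter, not parse free-form text.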
# B Additional Details on Environments
# B.1 Communicative Watch-And-Help
Communicative Watch-And-Help (C-WAH) is an extension of the Watch-And-Help challenge [35], which enables agents to send messages to each other. Sending messages, alongside other actions, takes one timestep and has an upper limit on message length. | 2307.02485#52 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 53 | Zero-shot ICL (without demonstration)
Figure 6: Brief templates of few-shot ICL and zero-shot ICL for recommendation tasks.
since additional in-context demonstrations are provided to LLMs. Despite the reduction in performance, zero-shot ICL entirely relieves the requirement of task-specific recommendation datasets to form in-context demonstrations and can be suitable for certain tasks like conversational recommendations, where users are not likely to provide any demonstration to LLMs. For example, Wang et al. [92] prompt ChatGPT for conversational recommendations with a zero-shot ICL template containing two parts: a text description of conversational recommendation tasks (e.g., "Recommend items based on user queries in the dialogue."), and a format guideline in natural language, such as "The output format should be ⟨no.⟩ ⟨item title⟩.", making the recommendation results easier to parse.
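The two template styles can be written down programmatically. A minimal sketch, where the wording paraphrases the templates described above and is illustrative rather than any system's actual API:

```python
def format_interactions(titles):
    """Render user-item interactions as '[1] <title> ... [n] <title>'."""
    return "\n".join(f"[{i}] {t}" for i, t in enumerate(titles, start=1))

def zero_shot_prompt(interactions):
    """Zero-shot ICL: task description + interactions, no demonstrations."""
    return ("Your task is to recommend a new item based on the user-item "
            "interactions.\nUser-item interactions:\n"
            + format_interactions(interactions) + "\nRecommend item:")

def few_shot_prompt(interactions, demos):
    """Few-shot ICL: demos is a list of (interaction_titles, recommended_title)
    pairs prepended as in-context demonstrations."""
    demo_blocks = []
    for hist, rec in demos:
        demo_blocks.append("User-item interactions:\n"
                           + format_interactions(hist)
                           + f"\nRecommend item: {rec}")
    return ("Below are some examples.\n\n"
            + "\n\n".join(demo_blocks) + "\n\n"
            + zero_shot_prompt(interactions))

p = few_shot_prompt(["The Matrix", "Inception"],
                    demos=[(["Alien", "Blade Runner"], "Dune")])
```

The only structural difference between the two settings is the demonstration block; everything else, including the final "Recommend item:" slot the LLM is expected to complete, is shared.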
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, SUBMISSION 2023
Table 3: An organization of representative methods of prompting LLMs for RecSys in terms of three paradigms: prompting, prompt tuning, and instruction tuning. We subsequently categorize existing works corresponding to each paradigm, including the specific recommendation tasks and the LLM backbones considered in these works. | 2307.02046#53 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 53 | # 7.1 Underestimation
For our main evaluations, we aim to construct counterfactual tasks that have the same difficulty as the default variants so that task difficulty does not confound our comparisons. This is not always possible; in fact, an objective difficulty measure may not even exist. One could, for example, argue that base-11 addition is harder than base-10
because it requires reasoning with one additional digit, or base-9 is harder than base-10 because on average the sums would consist of more digits. | 2307.02477#53 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
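The base-b addition discussed above can be pinned down exactly; a small sketch (Python's built-in `int(s, base)` handles the counterfactual bases directly):

```python
def add_in_base(x, y, base):
    """Add two numbers written as digit strings in the given base and
    return the sum as a digit string in the same base (digits above 9
    use letters, matching int()'s convention)."""
    digits = "0123456789abcdefghijklmnopqrstuvwxyz"
    total = int(x, base) + int(y, base)
    if total == 0:
        return "0"
    out = ""
    while total:
        out = digits[total % base] + out
        total //= base
    return out

# The same surface problem "75 + 48" has different answers in different bases:
print(add_in_base("75", "48", 10))  # 123
print(add_in_base("75", "48", 9))   # 134
print(add_in_base("75", "48", 11))  # 112
```

This illustrates why the counterfactual variant probes the abstraction rather than the memorized string: the identical prompt surface maps to different correct outputs depending on the assumed base.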
2307.02485 | 53 | Task Name: Predicate Set
- Prepare afternoon tea: ON(cupcake,coffeetable), ON(pudding,coffeetable), ON(apple,coffeetable), ON(juice,coffeetable), ON(wine,coffeetable)
- Wash dishes: IN(plate,dishwasher), IN(fork,dishwasher)
- Prepare a meal: ON(coffeepot,dinnertable), ON(cupcake,dinnertable), ON(pancake,dinnertable), ON(poundcake,dinnertable), ON(pudding,dinnertable), ON(apple,dinnertable), ON(juice,dinnertable), ON(wine,dinnertable)
- Put groceries: IN(cupcake,fridge), IN(pancake,fridge), IN(poundcake,fridge), IN(pudding,fridge), IN(apple,fridge), IN(juice,fridge), IN(wine,fridge)
- Set up a dinner table: ON(plate,dinnertable), ON(fork,dinnertable)
Table 2: Task description in C-WAH. There are 5 types of tasks and each of them contains a few predicates. | 2307.02485#53 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 54 | Paradigms / Methods / Recommendation Tasks / LLM Backbones / References:
Prompting
- Conventional Prompting: Review Summary (ChatGPT [39]); Relation Labeling (ChatGPT [4])
- In-Context Learning (ICL): Top-K Recommendation (ChatGPT [3], [39], [88]^1, [89]^2, [90]^3, [91]^4, [92]^5, [93]^6, [94]; GPT-3 [95]^7, [96]; T5 [97], [98]^8; PaLM [99], [100]); Rating Prediction (ChatGPT [3], [58], [88]^1, [101]^9; ChatGLM [102]^10); Conversational Recommendation (ChatGPT [3], [92]^5, [93]^6, [103]); Explanation Generation (ChatGPT [3], [39])
- Chain-of-Thought (CoT): Top-K Recommendation (T5 [20])
- Hard Prompt Tuning: (Refer to ICL above, see Section 5.2.1 for explanations)
Prompt Tuning and Instruction Tuning
- Soft Prompt Tuning; Full-model Tuning with Prompt; Parameter-efficient Model Tuning with Prompt: Top-K Recommendation and Rating Prediction, with LLM backbones T5, PaLM, M6, and LLaMA | 2307.02046#54 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 54 | because it requires reasoning with one additional digit, or base-9 is harder than base-10 because on average the sums would consist of more digits.
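The digit-count trade-off above can be made concrete with a short sketch (ours, not the paper's code) that renders the same sum in bases 9, 10, and 11:

```python
def to_base(n: int, base: int) -> str:
    """Render a non-negative integer in the given base (2-16)."""
    digits = "0123456789abcdef"
    if n == 0:
        return "0"
    out = []
    while n:
        out.append(digits[n % base])
        n //= base
    return "".join(reversed(out))

# The same value, 59 + 39 = 98, written in three bases: base-9 needs an
# extra digit position, while base-11 needs an extra digit symbol ('a').
print(to_base(98, 9))    # "118"
print(to_base(98, 10))   # "98"
print(to_base(98, 11))   # "8a"
```

Neither rendering is unambiguously "harder": the base-9 sum is longer, and the base-11 sum draws on a larger digit alphabet.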
Retrieving notes in melodies in different keys faces a similar issue: one way of retrieving a note in a melody in an uncommon key would be to first retrieve it in a canonical key and then transpose it to the desired key. With this strategy, the counterfactual task consists of 2 steps and is harder than (and requires first) completing the 1-step original task. This strategy is not the only way to solve the task: an alternate one would be to recall a melody as a series of abstract relations in a scale and directly map them onto notes in a target key. However, the 2-step process is a natural one that is often employed by musicians. The counterfactual setup thus introduces a confounder: low performance may be driven by the increased difficulty of the counterfactual task, rather than overfitting to melodies in their canonical keys, if models are employing the two-step strategy. However, since both strategies are available to models and we do not prompt them to use a particular one, reliance on this two-step strategy may itself be indicative of overfitting to the original canonical keys.
# 7.2 Overestimation | 2307.02477#54 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 54 | Table 2: Task description in C-WAH. There are 5 types of tasks and each of them contains a few predicates.
Tasks Five types of tasks are available in C-WAH, named Prepare afternoon tea, Wash dishes, Prepare a meal, Put groceries, and Set up a dinner table. These tasks include a range of housework, and each task contains a few subgoals, which are described by predicates. A predicate is in "ON/IN(x, y)" format, that is, "Put x ON/IN y". The detailed descriptions of tasks are listed in Table 2.
The task goal is to satisfy all the given subgoals within 250 time steps, and the number of subgoals in each task ranges from 3 to 5.
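As a minimal sketch of how such predicate goals can be checked (our own encoding, not the benchmark's actual API), a task reduces to a multiset of "ON/IN(x, y)" strings that must all hold in the current world state:

```python
from collections import Counter

def task_satisfied(goal_predicates, world_relations):
    """Return True when every goal predicate (with multiplicity) appears
    among the currently-true relation strings of the world state."""
    need = Counter(goal_predicates)
    have = Counter(world_relations)
    return all(have[p] >= n for p, n in need.items())

# "Set up a dinner table" from Table 2
goal = ["ON(plate,dinnertable)", "ON(fork,dinnertable)"]
state = ["ON(plate,dinnertable)", "IN(apple,fridge)", "ON(fork,dinnertable)"]
print(task_satisfied(goal, state))  # True
```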
Observation Space C-WAH has two observation modes, named Symbolic Observation and Visual Observation. For Symbolic Observation, we followed the setting of the original Watch-And-Help challenge: an agent can receive all the object information in the same room as the agent, and the information includes location, status, name, relationship, etc.
For Visual Observation, agents can receive the egocentric RGB image and depth image, as well as some auxiliary observations. The detailed observation space is listed here:
⢠RGB image: the egocentric image comes from the camera facing forward, with screen size 256 à 512 and ï¬eld of view 60; | 2307.02485#54 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02477 | 55 | # 7.2 Overestimation
We can never be certain of how rarely particular counterfactual conditions are encountered during pretraining. It is quite likely that there is text online that, for example, draws rotated versions of various objects used in our study. Consequently, the effect of overfitting could also manifest in our counterfactual conditions, and the default-counterfactual gap could actually be larger for some genuinely unseen conditions.
We also distinguish between two types of counterfactual perturbations. One type fundamentally affects the operation of the world model and necessitates an understanding of the counterfactual world to perform the task in it (e.g., arithmetic base or 1-based indexing^13). On the other hand, some perturbations are more superficial and may admit a shortcut where the model first figures out a simple mapping of the input back to the default conditions and performs the task (potentially leveraging instance-level memorization) under those. In some of our tasks, this mapping may be simple, such as
^13 It may be tempting to consider a simple replacement strategy [i]→[i-1] to map back to 0-based indexing. But this does not work for the dictionary type. There are other complications; see Table 2. | 2307.02477#55 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 55 | • RGB image: the egocentric image comes from the camera facing forward, with screen size 256 × 512 and field of view 60;
• Depth image: the depth image has the same camera intrinsic parameters as the RGB image;
• Instance mask: it is an image where each object id is mapped to a color and the camera intrinsic parameters are the same as the RGB image;
• Agent position: the agent's position in the simulation world;
• Messages: the messages sent by all the agents.
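A hypothetical container for one step of this visual observation (field names are ours, mirroring the list above; this is not the environment's API):

```python
from dataclasses import dataclass
from typing import Any, List, Tuple

@dataclass
class VisualObservation:
    rgb: Any                 # 256 x 512 egocentric RGB image
    depth: Any               # depth image, same camera intrinsics as rgb
    instance_mask: Any       # image mapping each object id to a color
    agent_position: Tuple[float, float, float]  # position in the simulation world
    messages: List[str]      # messages sent by all the agents

obs = VisualObservation(rgb=None, depth=None, instance_mask=None,
                        agent_position=(1.0, 0.0, -2.5),
                        messages=["I will check the kitchen."])
print(obs.messages[0])  # I will check the kitchen.
```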
Action Space The action space is similar to that in the original Watch-And-Help Challenge, with a new action, send message, added. The detailed action space is listed here:
• Walk towards: move to an object in the same room as the agent, or to a room;
• Turn left: turn left by 30 degrees;
• Turn right: turn right by 30 degrees;
• Grasp: grasp an object; only an agent close to the object can perform the action successfully;
• Open: open a closed container; only an agent close to the container can perform the action successfully;
• Close: close an open container; only an agent close to the container can perform the action successfully;
• Put: put the held objects into an open container or onto a surface; only an agent close to the target position can perform the action successfully; | 2307.02485#55 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 56 | Code Availability:
^1 https://github.com/rainym00d/LLM4RS
^2 https://github.com/jizhi-zhang/FaiRLLM
^3 https://github.com/Jyonn/GENRE-requests
^4 https://github.com/RUCAIBox/LLMRank
^5 https://github.com/RUCAIBox/iEvaLM-CRS
^6 https://github.com/Linxyhaha/GeneRec
^7 https://github.com/AGI-Edgerunners/LLM-Next-Item-Rec
^8 https://github.com/JacksonWuxs/PromptRec
^9 https://github.com/jwzhanggy/Graph Toolformer
^10 will be available at https://gitee.com/mindspore/models/tree/master/research/recommend/KAR
^11 https://github.com/rutgerswiselab/GenRec
^12 https://anonymous.4open.science/r/LLM4Rec-Recsys
# 5.1.3 Chain-of-Thought (CoT) Prompting | 2307.02046#56 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 56 | the word replacements in the natural language logical reasoning task^14 (§3.4) and the transformation functions for the drawing task (§3.6), which could potentially be exploited by the models. We explicitly disallow this in our prompt for the drawing task (Table 7) but did not identify a good way to forbid this for logical reasoning, potentially accounting for its generally high counterfactual performance. Finally, we reiterate from §4 that a non-perfect CCC accuracy does not allow us to perfectly tease apart counterfactual performance and a failure of counterfactual condition comprehension. But often the default-counterfactual gap is so prominent that it is still strongly suggestive of overfitting to the default conditions. Also, recall from §2 that the CCC itself is also a nontrivial task. For ThonPy, for example, the CCC also involves program evaluation, albeit with simpler statements that involve less reasoning, such as print("qrstu"[4]). We do not see an easy way to introduce ThonPy CCC that is entirely disentangled from program evaluation. This conflation would result in the CCC accuracy's being lower than what would reflect the model's understanding of the counterfactual conditions.
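To make the 1-based indexing example concrete (our own snippet, not the paper's code): under 0-based Python, print("qrstu"[4]) yields 'u', while a 1-based reading yields 't'; and the naive [i] -> [i-1] rewrite fails for dicts, whose subscripts are keys rather than positions:

```python
def one_based_get(seq, i):
    """Index a string/list under a 1-based convention."""
    if i < 1:
        raise IndexError("1-based indices start at 1")
    return seq[i - 1]

print("qrstu"[4])                 # 0-based Python: 'u'
print(one_based_get("qrstu", 4))  # the same expression read 1-based: 't'

d = {1: "one", 2: "two"}
print(d[1])   # 'one': the bracket holds a key, not a position
# a blind rewrite d[1] -> d[0] would raise KeyError instead
```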
# 8 Related Work | 2307.02477#56 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 56 | • Put: put the held objects into an open container or onto a surface; only an agent close to the target position can perform the action successfully;
• Send message: send a message to other agents; no more than 500 characters can be sent at a time.
B.2 ThreeDWorld Multi-Agent Transport
Figure 8: TDW-MAT scenes, target objects, and containers (columns: placements, target objects, containers; rows: Task A, food; Task B, stuff).
As an extension of the ThreeDWorld Transport Challenge[11], ThreeDWorld Multi-Agent Transport (TDW-MAT) supports multi-agent cooperation with natural language communication and includes more types of objects with more realistic placements. | 2307.02485#56 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 57 | # 5.1.3 Chain-of-Thought (CoT) Prompting
Although ICL has shown great effectiveness in prompting LLMs for downstream tasks with in-context demonstrations, recent studies indicate that LLMs still have limited performance in reasoning-heavy tasks [50]. More specifically, by prompting LLMs with in-context examples of input-output pairs, the answers directly generated by LLMs often suffer from missing one or a few intermediate reasoning steps in multi-step problems like mathematical equations, leading to a broken reasoning logic that causes errors in the subsequent reasoning steps (i.e., "one-step missing errors" [50]). Similar multi-step problems also exist in RecSys, such as the multi-step reasoning of user preferences based on the multi-turn dialogues in conversational recommendations. To address such limitations, CoT offers a special prompting strategy to enhance the reasoning ability of LLMs by annotating intermediate reasoning steps in the prompt. This enables LLMs to break down complicated decision-making processes and generate the final output with step-by-step reasoning.
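As a concrete sketch of this prompting style (hypothetical templates of our own, not prompts from the works surveyed), a zero-shot CoT prompt appends a reasoning trigger, while a few-shot CoT prompt prepends demonstrations whose intermediate reasoning steps are annotated:

```python
def zero_shot_cot(task: str) -> str:
    # zero-shot CoT: no demonstrations, just a step-by-step trigger
    return f"{task}\nLet's think step by step."

def few_shot_cot(demos, task: str) -> str:
    # few-shot CoT: worked examples with annotated intermediate reasoning
    blocks = [f"Q: {q}\nReasoning: {r}\nA: {a}" for q, r, a in demos]
    return "\n\n".join(blocks + [f"Q: {task}\nReasoning:"])

demos = [(
    "A user watched 'Alien' and 'Blade Runner'. Which movie should we recommend?",
    "Both films are classic science fiction, so the user likely prefers that genre.",
    "'The Terminator'",
)]
print(zero_shot_cot("Which movie should we recommend to this user?"))
print(few_shot_cot(demos, "Which movie should we recommend to this user?"))
```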
Considering the suitable prompting strategies for adapting LLMs to various downstream tasks with complex reasoning, Zhao et al. [19] discuss the combination of ICL and CoT prompting under two major settings: Few-shot CoT and Zero-shot CoT, as illustrated below. | 2307.02046#57 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 57 | # 8 Related Work
Evaluating Conceptual Structures in LMs. Much prior work has investigated the extent to which LMs acquire a grounded understanding of the world through text-only training (Piantadosi and Hill, 2022; Zhang et al., 2020; Ilharco et al., 2021; Li et al., 2021; i.a.). These works have generally found that conceptual structures of certain concepts (e.g., color, size) often plausibly mirror those of the grounded world (Abdou et al., 2021; Patel and Pavlick, 2022; Mollo and Millière, 2023). As in our study, these studies are a test of generalization: such structures would not manifest if the concepts were memorized in a one-hot-like manner. But our evaluation differs in that it targets the reasoning process instead of the generalization to new concepts or conceptual structures (Kondo | 2307.02477#57 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 57 | Tasks Two tasks are available in TDW-MAT: food-transporting task and stuff-transporting task. The two tasks have different types of target objects and containers. Figure 8 shows an overview of the two tasks: We have 2 floorplans and each of them has 3 layouts. The food-transporting task has 6 types of targets (apple, banana, orange, bread, loaf bread, and burger) and 3 containers (bowl, plate, and tea tray). In contrast, the stuff-transporting task has 6 different types of targets (iPhone, iPod, pen, lighter, purse, and key) and 3 containers (plastic basket, wood basket, and wicker basket). In each task, there are 10 target objects and 4 containers in total.
The task goal is to transport the target objects to the goal position. Agents need to transport as many target objects as possible within 3000 frames.
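The task specification above can be summarized as a small configuration; the dict layout below is our own sketch, with the names taken from the text:

```python
# TDW-MAT task spec from the text above, organized as a config dict
# (the structure is our own sketch; the names come from the text).
TDW_MAT_TASKS = {
    "food": {
        "targets": ["apple", "banana", "orange", "bread", "loaf bread", "burger"],
        "containers": ["bowl", "plate", "tea tray"],
    },
    "stuff": {
        "targets": ["iPhone", "iPod", "pen", "lighter", "purse", "key"],
        "containers": ["plastic basket", "wood basket", "wicker basket"],
    },
}
FRAME_LIMIT = 3000        # transport as many targets as possible within this
TARGETS_PER_EPISODE = 10  # target-object instances placed per task
CONTAINERS_PER_EPISODE = 4

for name, spec in TDW_MAT_TASKS.items():
    # each task defines 6 target types and 3 container types
    assert len(spec["targets"]) == 6 and len(spec["containers"]) == 3
```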
Observation Space The embodied agent receives the egocentric RGB image and depth image as the main observation, as well as some auxiliary observations. The detailed observation space is listed here:
• RGB image: the egocentric image comes from the camera facing forward, with screen size 256 × 256 and field of view 120;
• Depth image: the depth image has the same camera intrinsic parameters as the RGB image; • Instance mask: it is an image where each object id is mapped to a color and the camera | 2307.02485#57 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 58 | • Zero-shot CoT. By inserting tricky texts such as "Let's think step by step" and "Therefore, the answer is" into the prompt, zero-shot CoT leads LLMs to generate task-specific reasoning steps independently, without providing any task-relevant instruction or grounding example.
• Few-shot CoT. Task-specific reasoning steps are manually designed for each demonstration in ICL, where the original input-output examples are augmented into input-CoT-output manners. Besides, CoT can also augment the task descriptions in ICL demonstrations, by adding interpretable descriptions of reasoning steps based on task-specific knowledge.
In practice, the design of appropriate CoT reasoning steps highly depends on the contexts and objectives of the specific recommendation tasks. For example, a simple CoT template "Please infer the preference of the user and recommend suitable items." is proposed to guide LLMs to first infer the user's explicit preference and then generate final recommendations [20]. So far, there is still a notable lack of research addressing the general format of CoT prompting for recommendation tasks. Next, we present a preliminary idea of CoT prompting through an example in the context of e-commerce recommendations below.
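Mechanically, the two settings differ only in whether annotated demonstrations are prepended to the query. A rough sketch (the function and variable names are our own, not from the survey):

```python
ZERO_SHOT_TRIGGER = "Let's think step by step."

def build_cot_prompt(query, demos=None, trigger=ZERO_SHOT_TRIGGER):
    """Few-shot CoT when input-CoT-output demonstrations are supplied;
    zero-shot CoT (reasoning trigger only) otherwise."""
    parts = list(demos) if demos else []
    parts.append(f"Input: {query}\n{trigger}")
    return "\n\n".join(parts)

query = "The user bought running shoes and sports socks. Recommend one item."
print(build_cot_prompt(query))  # zero-shot variant
print(build_cot_prompt(query, demos=[
    "Input: The user bought a tent and a sleeping bag.\n"
    "Step 1: The user is preparing for camping.\n"
    "Output: camping stove",
]))                             # few-shot variant
```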
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, SUBMISSION 2023 | 2307.02046#58 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 58 | 14 To be more concrete, imagine that a model memorizes an instance with nine premises on dogs involving complex logical relationships, and that it entails a given conclusion. For the counterfactual instance, we replace the word "dogs" with another object, say "headphones," to make the premises no longer factually true. Instead of performing the reasoning over premises with headphones, such as how they are, counterfactually, the cutest creatures, a model could identify the mapping "dogs" → "headphones", revert it (i.e., replace all "headphones" back to "dogs"), and perform the task under the default common-sense-complying conditions.
et al., 2023). While prior work identified that the latter is embedded in LMs, we found that they do not fully learn the former. | 2307.02477#58 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 58 | • Depth image: the depth image has the same camera intrinsic parameters as the RGB image; • Instance mask: it is an image where each object id is mapped to a color and the camera
intrinsic parameters are the same as the RGB image;
• Agent position and rotation: the agent's position and rotation in the simulation world; • Messages: the messages sent by all the agents;
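For concreteness, the observation fields listed above can be collected into a record type; the class and field names below are our own sketch of the listed observation space, not the benchmark's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class Observation:
    """One agent's per-frame observation in TDW-MAT, per the list above."""
    rgb: list            # 256x256 egocentric RGB image, 120-degree FOV
    depth: list          # depth image with the same camera intrinsics
    instance_mask: list  # image mapping each object id to a color
    position: tuple      # agent position in the simulation world
    rotation: tuple      # agent rotation
    messages: list = field(default_factory=list)  # messages from all agents

obs = Observation(rgb=[], depth=[], instance_mask=[],
                  position=(0.0, 0.0, 0.0), rotation=(0.0, 0.0, 0.0))
print(obs.messages)  # []
```

`default_factory` gives each observation its own (initially empty) message list rather than one shared across instances.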
Action Space In TDW-MAT, there are 7 types of actions for agents to interact with the environment or communicate with each other. Each action takes several frames and the detailed action space is listed here:
• Move forward: move forward 0.5 m; • Turn left: turn left by 15 degrees; • Turn right: turn right by 15 degrees; • Grasp: grasp an object; only if the agent is close to the object can it perform the action
successfully. The object can be either a target or a container;
• Put In: put the target into the container; only if the agent is holding a target in one hand and a container in the other can it perform the action.
• Drop: drop the objects held in hand; • Send message: send a message to other agents. In each frame, no more than 500 characters
can be sent.
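The action space can likewise be sketched as an enum, with the per-frame message budget enforced by a small check (the names are ours; the values paraphrase the list above):

```python
from enum import Enum

class Action(Enum):
    MOVE_FORWARD = "move forward 0.5 m"
    TURN_LEFT = "turn left by 15 degrees"
    TURN_RIGHT = "turn right by 15 degrees"
    GRASP = "grasp a nearby target or container"
    PUT_IN = "put the held target into the held container"
    DROP = "drop the objects held in hand"
    SEND_MESSAGE = "send a message to other agents"

MAX_MESSAGE_CHARS = 500  # per-frame communication budget

def valid_message(msg):
    """True iff the message fits within one frame's 500-character budget."""
    return len(msg) <= MAX_MESSAGE_CHARS

assert len(Action) == 7  # the 7 action types listed above
print(valid_message("I grabbed the bowl; you take the plate."))
```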
# C Example Prompts
We show an example prompt for the Reasoning Module on C-WAH in Table 3, and an example prompt for the Reasoning Module on TDW-MAT in Table 5.
| 2307.02485#58 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 59 |
[CoT Prompting] Based on the user purchase history, let's think step-by-step. First, please infer the user's high-level shopping intent. Second, what items are usually bought together with the purchased items? Finally, please select the most relevant items based on the shopping intent and recommend them to the user.
Despite the limited number of works on CoT prompting in the RecSys field, a recent study [106] has revealed the great effectiveness of adopting CoT prompting to facilitate the graph reasoning ability of LLMs (T5 in particular) by modeling the reasoning steps as nodes and connecting the reasoning paths as edges, instead of a sequential chain. We believe that similar ideas can potentially be transferred to, and contribute to, CoT prompting for RecSys, based on the fact that recommendation tasks can be considered a special case of link prediction problems in graph learning.
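The e-commerce template quoted above can be assembled programmatically; the wording below follows the quoted prompt, while the helper function itself is our own sketch:

```python
COT_STEPS = (
    "First, please infer the user's high-level shopping intent.",
    "Second, what items are usually bought together with the purchased items?",
    "Finally, please select the most relevant items based on the shopping "
    "intent and recommend them to the user.",
)

def ecommerce_cot_prompt(purchase_history):
    """Assemble the step-by-step e-commerce CoT prompt quoted above."""
    header = ("Based on the user purchase history, let's think step-by-step. "
              "Purchase history: " + ", ".join(purchase_history) + ".")
    return "\n".join((header,) + COT_STEPS)

print(ecommerce_cot_prompt(["running shoes", "sports socks"]))
```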
# 5.2 Prompt Tuning | 2307.02046#59 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 59 | et al., 2023). While prior work identified that the latter is embedded in LMs, we found that they do not fully learn the former.
Causal Analysis. Our counterfactual perturbations can be informally viewed as interventions under a causal inference framework (Pearl, 2009). This relationship has been explored in machine learning and NLP for commonsense reasoning (Kıcıman et al., 2023), interpretability (Elazar et al., 2021; Geiger et al., 2021, 2022), spurious correlation detection (Veitch et al., 2021; Eisenstein, 2022), and fairness (Kusner et al., 2017; Nabi and Shpitser, 2018). Under this perspective, the failure of generalization to counterfactual worlds that we observe in LMs can be viewed as a failure to robustly learn the causal effects of world states on our evaluated tasks.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 60 | # 5.2 Prompt Tuning
In contrast to manually prompting LLMs for downstream tasks (e.g., manually generating task-specific prompts in natural language), prompt tuning serves as an additive technique of prompting, which adds new prompt tokens to LLMs and optimizes the prompt based on the task-specific dataset. Generally, prompt tuning requires less task-specific knowledge and human effort than manually designing prompts for specific tasks, and only involves minimal parameter updates of the tunable prompt and the input layer of LLMs. For example, AutoPrompt [107] decomposes the prompt into a set of vocabulary tokens and finds the most suitable tokens for language models via gradient-based search with respect to the performance on specific tasks.
According to this definition, prompts can be either discrete (i.e., hard) or continuous (i.e., soft), guiding LLMs to generate the expected output [108]. Thus, we categorize prompt tuning strategies for prompting LLMs for RecSys into hard prompt tuning and soft prompt tuning, as illustrated below.
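To make the "minimal parameter updates" point concrete, here is a toy, dependency-free sketch of soft prompt tuning: continuous prompt vectors are prepended to frozen token embeddings, and only those vectors receive gradient updates against a stand-in objective. All names, dimensions, and the toy loss are illustrative assumptions, not any system's actual implementation.

```python
import random

random.seed(0)
DIM, N_PROMPT = 4, 3

# Frozen pieces of the "LLM": token embeddings are never updated here.
token_emb = {t: [random.gauss(0, 1) for _ in range(DIM)]
             for t in ["user", "likes", "sci-fi"]}
# Tunable soft prompt: continuous vectors prepended to every input.
soft_prompt = [[0.0] * DIM for _ in range(N_PROMPT)]

def encode(tokens):
    """Input layer: soft prompt rows followed by frozen token embeddings."""
    return soft_prompt + [token_emb[t] for t in tokens]

def mean_state(rows):
    n = len(rows)
    return [sum(r[j] for r in rows) / n for j in range(DIM)]

def loss(rows, target):
    """Toy stand-in objective: pull the mean state toward a target vector."""
    m = mean_state(rows)
    return 0.5 * sum((m[j] - target[j]) ** 2 for j in range(DIM))

target = [1.0, -1.0, 0.5, 0.0]
tokens = ["user", "likes", "sci-fi"]
lr, n_rows = 1.0, N_PROMPT + len(tokens)

before = loss(encode(tokens), target)
for _ in range(300):
    m = mean_state(encode(tokens))
    grad = [(m[j] - target[j]) / n_rows for j in range(DIM)]
    for row in soft_prompt:      # gradient step on the prompt rows only;
        for j in range(DIM):     # token_emb stays frozen throughout
            row[j] -= lr * grad[j]
after = loss(encode(tokens), target)
print(after < before)  # True: only the soft prompt was tuned
```

The point of the sketch is the split: the "model" (here, `token_emb`) is untouched, and all adaptation lives in the small prepended prompt matrix.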
# 5.2.1 Hard Prompt Tuning | 2307.02046#60 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 60 | Counterfactual Evaluation. "Counterfactuals" is an informally-used term in NLP and has been used to refer to different types of perturbations. One line of work concerns counterfactuals to a certain event or situation that is still licensed in a default world model (Qin et al., 2019, 2020; Yang et al., 2020; Frohberg and Binder, 2022; i.a.), in contrast to our counterfactual world states that deviate from the default. Qin et al. (2019) and Frohberg and Binder (2022) found that GPT-3 and earlier models struggle with consistently reasoning under this type of counterfactual conditions, while Kıcıman et al. (2023) observed more recent LMs to achieve higher counterfactual reasoning accuracy. Another body of work examines the robustness of model predictions using counterfactual data (Kaushik et al., 2020, 2021; Gardner et al., 2020). More similar to our study, Li et al. (2023b) showed that while the LMs they investigated seem to be able to perform some reasoning in counterfactual worlds, this is largely affected by superficial lexical cues. Our results reveal that more recent LMs still exhibit such difficulties.
# 9 Conclusion | 2307.02477#60 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 60 | I'm Alice . I'm in a hurry to finish the housework with my friend Bob together . Given our shared goal , dialogue history , and my progress and previous actions , please help me choose the best available action to achieve the goal as soon as possible . Note that I can hold two objects at a time and there are no costs for holding objects . All objects are denoted as < name > ( id ) , such as < table > (712) . Goal : Find and put 1 wine , 1 pancake , 1 poundcake , 1 juice , 1 apple onto the < kitchentable > (130) . Progress : I'm holding nothing . I'm in the < kitchen > , where I found < kitchentable > (130) , < apple > (386) , and unchecked containers < kitchencabinet > (137) , < kitchencabinet > (138) , < kitchencabinet > (139) , < kitchencabinet > (140) , < stove > (157) , < dishwasher > (159) , < fridge > (165) , < fridge > (166) , < microwave > (167) . I also see Bob here in the < kitchen > , he is holding < apple > (387) . I found an unchecked container | 2307.02485#60 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 61 | # 5.2.1 Hard Prompt Tuning
Hard prompt tuning generates and updates discrete text templates as prompts (e.g., in natural language) for adapting LLMs to specific downstream tasks. Dong et al. [108] argue that ICL can be considered a subclass of hard prompt tuning, regarding the in-context demonstrations in ICL as part of the prompt. From this perspective, ICL performs hard prompt tuning for prompting LLMs to downstream recommendation tasks by refining natural language prompts based on task-specific recommendation datasets. Despite the effectiveness and convenience of generating or refining natural language prompts for downstream recommendation tasks, hard prompt tuning inevitably faces the challenge of discrete optimization, which requires laborious trial and error over a vast vocabulary space to find suitable prompts for specific recommendation tasks.
# 5.2.2 Soft Prompt Tuning
In contrast to discrete prompts, soft prompt tuning employs continuous vectors (e.g., text embeddings) as prompts and optimizes them based on task-specific datasets, such as
using gradient methods to update the prompt with respect to a recommendation loss. In LLMs, soft prompt tokens are often concatenated to the original input tokens at the input layer (e.g., tokenizer). During soft prompt tuning, only the soft prompt and minimal parameters at the input layer of LLMs will be updated. | 2307.02046#61 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
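The "laborious trial and error" that chunk 61 above attributes to hard prompt tuning can be sketched as a discrete search: enumerate candidate natural-language templates, score each on a small labeled dev set, and keep the best. The templates and the `predict()` callable below are illustrative assumptions standing in for a real LLM call, not anything from the survey.

```python
# Minimal sketch of hard prompt tuning as discrete search over candidate
# templates, scored on a small labeled dev set. Templates and the predict()
# callable are illustrative assumptions, not from the survey.

CANDIDATE_TEMPLATES = [
    "Given the user's history {history}, recommend one item.",
    "A user interacted with {history}. What should they try next?",
    "Your task is to recommend an item for a user who liked {history}.",
]

def dev_accuracy(template, dev_set, predict):
    """Fraction of dev examples where the prompted model recovers the held-out item."""
    hits = 0
    for history, target in dev_set:
        prompt = template.format(history=", ".join(history))
        hits += int(predict(prompt) == target)
    return hits / len(dev_set)

def search_hard_prompt(dev_set, predict):
    # Discrete optimization reduces to trial and error: evaluate every
    # candidate template and keep the best-scoring one.
    return max(CANDIDATE_TEMPLATES, key=lambda t: dev_accuracy(t, dev_set, predict))
```

In practice each `predict()` call is an LLM query, which is why this enumeration becomes expensive as the template space grows.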
2307.02477 | 61 | # 9 Conclusion
Through our counterfactual evaluation on 11 tasks, we identified a consistent and substantial degradation of LM performance under counterfactual conditions. We attribute this gap to overfitting to the default task variants, and thus encourage future LM analyses to explicitly consider abstract task ability as detached from observed task performance, especially when these evaluated task variants might exist in abundance in the LM pretraining corpora.
# Acknowledgments | 2307.02477#61 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 61 | , < microwave > (167) . I also see Bob here in the < kitchen > , he is holding < apple > (387) . I found an unchecked container < bathroomcabinet > (25) in the bathroom . The livingroom is unexplored . The bedroom is unexplored . Dialogue history : Alice : " Hi , I'll let you know if I find any goal objects and finish any subgoals , and ask for your help when necessary ." Bob : " Thanks ! I'll let you know if I find any goal objects and finish any subgoals , and ask for your help when necessary ." Alice : " Bob , I'm in the kitchen with the < kitchentable > (130) and found < apple > (386) . Please check the living room and bedroom for the other items . I will search the kitchen cabinets and appliances ." Bob : " Alice , I found an < apple > (387) and I'm in the < livingroom >. Can you search the kitchen for the other items ? I'll check the bathroom and bedroom next ." Previous actions : [ goexplore ] < bathroom > (11) , [ goexplore ] < kitchen > (56) , [ send_message ] , [ gocheck ] < kitchencabinet > (133) , [ | 2307.02485#61 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 62 | To improve the recommendation performance of LLMs, some existing works combine advanced feature extraction and representation learning methods to better capture task-specific information in RecSys and embed it into soft prompts. For instance, Wu et al. [109] apply contrastive learning to capture user representations and encode them into prompt tokens, and Wang et al. [63] and Guo et al. [110] share a similar idea of encoding mutual information in cross-domain recommendations into soft prompts. In addition to directly embedding task-specific information into soft prompts, soft prompts can also be learned based on task-specific datasets. For example, randomly initialized soft prompts are adopted to guide T5 to generate desired recommendation results [104], where the soft prompt is optimized in an end-to-end manner with respect to a recommendation loss based on the T5 output. Compared to the hard prompt, the soft prompt is more feasible for tuning in continuous space, but at the cost of explainability [104]. In other words, compared to a task-specific hard prompt in natural language like "Your task is to recommend ...", the relationships between specific downstream tasks and a soft prompt written as continuous vectors are not interpretable to humans.
# 5.3 Instruction Tuning | 2307.02046#62 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
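The end-to-end optimization that chunks 61 and 62 above describe (prepend trainable continuous vectors to the frozen input embeddings, then update only those vectors by gradient descent on a recommendation loss) can be illustrated with a toy numeric sketch. The frozen linear scoring head below is an assumption standing in for an LLM; it is not any paper's actual model.

```python
import numpy as np

# Toy sketch of soft prompt tuning: trainable prompt vectors are prepended to
# frozen input embeddings, and only the prompt is updated by gradient descent
# on a recommendation-style loss. The linear scoring head stands in for an
# LLM and is an assumption for illustration only.

rng = np.random.default_rng(0)
d, P, T = 8, 2, 4                    # embedding dim, prompt length, input length
w = rng.normal(size=d)               # frozen model weights
E = rng.normal(size=(T, d))          # frozen input token embeddings
prompt = rng.normal(size=(P, d))     # the only trainable parameters
y = 1.0                              # target score, e.g. "user likes this item"

def loss_and_grad(prompt):
    X = np.vstack([prompt, E])       # prepend soft prompt at the input layer
    s = w @ X.mean(axis=0)           # frozen scoring head
    loss = (s - y) ** 2
    # d loss / d prompt_i = 2 * (s - y) * w / (P + T); E and w stay frozen
    grad = np.tile(2 * (s - y) * w / (P + T), (P, 1))
    return loss, grad

initial_loss, _ = loss_and_grad(prompt)
for _ in range(200):
    _, grad = loss_and_grad(prompt)
    prompt -= 0.1 * grad             # update the soft prompt only
final_loss, _ = loss_and_grad(prompt)
```

The loss falls while `w` and `E` never change, which is the defining property of soft prompt tuning: the task signal is absorbed entirely by the prepended vectors.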
2307.02477 | 62 | # Acknowledgments
We thank, alphabetically, Alex Gu, Alisa Liu, Belinda Li, Chenghao Yang, Han Guo, Hao Peng, Heyun Li, Jesse Dodge, Pratyusha Sharma, Tiwa Eisape, and Yizhong Wang for helpful discussions and feedback for this work. We are also grateful to Simeng Han for providing us with an updated version of the FOLIO dataset. Our drawing evaluation would not have been possible without our annotators Alex Hu, Ananya Harsh Jha, Belinda Li, Erjia Cao, Ha-na Park, Huirong Wen, Jiangjie Chen, Kabir Swain, Ka Wai Chan, Lucy Li, Simran Swain, Tejas Srinivasan, Tianyu Liu, Yue Bai, Yutaro Yamada, and Ziwei Wei. Zhaofeng would like to thank Jiamin Zhang for the guitar lessons, which were short but helpful for the relevant components of this paper. Figure 1 uses icons from flaticon.com. This study was supported by funds from the MIT-IBM Watson AI Lab, the MIT Quest for Intelligence, and the National Science Foundation under grants IIS-2212310 and IIS-2238240.
# References | 2307.02477#62 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 62 | (11) , [ goexplore ] < kitchen > (56) , [ send_message ] , [ gocheck ] < kitchencabinet > (133) , [ gocheck ] < kitchencabinet > (134) , [ gocheck ] < kitchencabinet > (135) , [ gocheck ] < kitchencabinet > (136) Available actions : A . [ send_message ] <" Bob , I see you have an < apple > (387) . Please put it on the < kitchentable > (130) and check the bedroom while I search the remaining kitchen cabinets and appliances for the other items ." > B . [ goexplore ] < livingroom > (198) C . [ goexplore ] < bedroom > (294) D . [ gocheck ] < kitchencabinet > (137) E . [ gocheck ] < kitchencabinet > (138) F . [ gocheck ] < kitchencabinet > (139) G . [ gocheck ] < kitchencabinet > (140) H . [ gocheck ] < stove > (157) I . [ gocheck ] < dishwasher > (159) J . [ gocheck ] < fridge > (165) K . [ gocheck ] < fridge | 2307.02485#62 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
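Prompts like the one in the chunk above end with lettered options (A through K), so a controller has to map the LLM's free-form reply back onto an executable action. A minimal parsing helper is sketched below; the accepted reply formats are assumptions, and the paper's actual parser may differ.

```python
import re

# Hypothetical helper for lettered action menus like the one above: map an
# LLM's free-form reply back to one of the available actions. The accepted
# reply formats are assumptions; the paper's actual parser may differ.

def choose_action(reply, options):
    """options: action strings for letters A, B, C, ...; reply: raw LLM text."""
    match = re.search(r"\b([A-Z])\b", reply)  # first standalone capital letter
    if match is None:
        raise ValueError(f"no option letter found in reply: {reply!r}")
    index = ord(match.group(1)) - ord("A")
    if index >= len(options):
        raise ValueError(f"option {match.group(1)} is out of range")
    return options[index]
```

This lets the controller accept either a bare letter ("B") or the echoed option line ("B . [ goexplore ] < livingroom > (198)") while failing loudly on unparseable replies.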
2307.02046 | 63 | # 5.3 Instruction Tuning
Although prompting LLMs has demonstrated remarkable few-shot performance on unseen downstream tasks, recent studies have demonstrated that prompting strategies have much poorer zero-shot ability [87]. To address this limitation, instruction tuning is proposed to fine-tune LLMs over multiple task-specific prompts. In other words, instruction tuning possesses features of both the prompting and pre-training & fine-tuning paradigms. This helps LLMs gain better capabilities of exactly following prompts as instructions for diverse downstream tasks, which hence contributes to enhanced zero-shot performance of LLMs on unseen tasks by accurately following new task instructions. The key insight of instruction tuning is to train LLMs to follow prompts as task instructions, rather than to solve specific downstream tasks. More specifically, instruction tuning can be divided into two stages: "instruction" (i.e., prompt) generation and model "tuning", since the straightforward idea of instruction tuning is the combination of prompting and fine-tuning LLMs.
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 63 | # References
Mostafa Abdou, Artur Kulmizev, Daniel Hershcovich, Stella Frank, Ellie Pavlick, and Anders Søgaard. 2021. Can language models encode perceptual structure without grounding? a case study in color. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 109–132, Online. Association for Computational Linguistics.
Andrea Agostinelli, Timo I. Denk, Zalán Borsos, Jesse Engel, Mauro Verzetti, Antoine Caillon, Qingqing Huang, Aren Jansen, Adam Roberts, Marco Tagliasacchi, Matt Sharifi, Neil Zeghidour, and Christian Frank. 2023. MusicLM: Generating music from text. | 2307.02477#63 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 64 | • Instruction (Prompt) Generation Stage. Formally, instruction tuning introduces a format of instruction-based prompt in natural language, which is composed of task-oriented input (i.e., task descriptions based on the task-specific dataset) and desired target (i.e., corresponding output based on the task-specific dataset) pairs. Considering the instruction tuning of LLMs for downstream recommendation tasks, Zhang et al. [20] propose a recommendation-oriented instruction template, including user preferences, intentions, and task forms, which serves as a common template for generating instructions for various recommendation tasks. More
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, SUBMISSION 2023
directly, three-part instruction templates in the form of "task description-input-output" are used in [59], [61] to generate instructions based on task-specific recommendation datasets. | 2307.02046#64 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
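The record above (2307.02046#64) describes instruction generation in a three-part "task description-input-output" format. A minimal sketch of such a template builder follows; the field markers and example values are illustrative assumptions, not the exact templates from the cited works.

```python
def build_instruction(task_description: str, task_input: str, target: str) -> dict:
    """Assemble one instruction-tuning example in the three-part
    'task description-input-output' format (illustrative sketch)."""
    prompt = (
        f"### Task\n{task_description}\n"
        f"### Input\n{task_input}\n"
        f"### Output\n"
    )
    return {"prompt": prompt, "completion": target}

# Hypothetical next-item recommendation example.
example = build_instruction(
    task_description="Given the user's viewing history, predict the next movie.",
    task_input="History: Inception, Interstellar, The Prestige",
    target="Tenet",
)
print(example["prompt"] + example["completion"])
```

Each returned pair is one supervised example that an instruction-tuning loop could consume.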
2307.02477 | 64 | Rohan Anil, Andrew M. Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, Eric Chu, Jonathan H. Clark, Laurent El Shafey, Yanping Huang, Kathy Meier-Hellstern, Gaurav Mishra, Erica Moreira, Mark Omernick, Kevin Robinson, Sebastian Ruder, Yi Tay, Kefan Xiao, Yuanzhong Xu, Yujing Zhang, Gustavo Hernandez Abrego, Junwhan Ahn, Jacob Austin, Paul Barham, Jan Botha, James Bradbury, Siddhartha Brahma, | 2307.02477#64 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 64 | # LLM outputs
First , we know that Bob is already holding an < apple > (387) and is in the < livingroom >. We also know that we need to find and put 1 wine , 1 pancake , 1 poundcake , 1 juice , and 1 apple onto the < kitchentable > (130) . Since we are already in the < kitchen > and there are several unchecked containers , the best course of action is to continue searching the kitchen cabinets and appliances for the remaining items . So , the best available action for Alice is : D . [ gocheck ] < kitchencabinet > (137)
Table 5: Example prompt for the Reasoning Module on TDW-MAT
# TDW-MAT Prompts | 2307.02485#64 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
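The Reasoning Module output in the record above ends with a lettered action choice such as "D . [ gocheck ] < kitchencabinet > (137)". A small parser for that pattern can be sketched as below, under the assumption that the framework's answer format always ends with this letter/verb/object/id shape (the regex is an assumption, not the paper's own parser).

```python
import re

# Matches e.g. "D . [ gocheck ] < kitchencabinet > (137)"
ACTION_RE = re.compile(r"([A-Z])\s*\.\s*\[\s*(\w+)\s*\]\s*<\s*(\w+?)\s*>\s*\((\d+)\)")

def parse_action(llm_output: str):
    """Extract (choice letter, action verb, object name, object id) from the
    final action line of a reasoning-module response; None if absent."""
    matches = ACTION_RE.findall(llm_output)
    if not matches:
        return None
    letter, verb, name, obj_id = matches[-1]  # take the last stated choice
    return letter, verb, name, int(obj_id)

out = "the best available action for Alice is : D . [ gocheck ] < kitchencabinet > (137)"
print(parse_action(out))  # ('D', 'gocheck', 'kitchencabinet', 137)
```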
2307.02046 | 65 | Model Tuning Stage. The second stage is to fine-tune LLMs over multiple aforementioned instructions for downstream tasks, where we categorize the existing works on RecSys, as shown in Table 3, according to the LLM fine-tuning manner: full-model tuning and parameter-efficient model tuning (see Section 4.2 for explanations), since basically the same principles of fine-tuning LLMs are adopted in this stage. For example, Bao et al. [59] utilize LoRA to make the instruction tuning of LLaMA more lightweight for downstream recommendation tasks. In addition to textual data in RecSys, instruction tuning has recently been explored to enhance the graph understanding ability of LLMs for recommendation tasks. In particular, Wu et al. [81] propose an LLM-based prompt constructor to encode the paths of nodes (e.g., candidate items) and edges (e.g., relationships between items) in behavior graphs into natural language descriptions, which are subsequently used for instruction tuning an LLM-based recommender on a task-specific dataset.
6 FUTURE DIRECTIONS In this survey, we have comprehensively reviewed the recent advanced techniques for LLM-enhanced recommender systems. Since the adaptation of LLMs to recommender systems is still at an early stage, there are still many challenges and opportunities. In this section, we discuss some potential future directions in this field.
# 6.1 Hallucination Mitigation | 2307.02046#65 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
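The record above (2307.02046#65) mentions LoRA as a parameter-efficient way to instruction-tune LLaMA. The arithmetic behind the saving can be sketched in pure Python: the frozen weight W is augmented with a trainable low-rank product B·A, so only r·(d_in + d_out) parameters are updated instead of d_in·d_out. The toy dimensions below are purely illustrative.

```python
def matmul(X, Y):
    """Naive matrix product for small illustrative matrices."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*Y)] for row in X]

d_in, d_out, r = 4, 4, 1          # toy sizes; real models use thousands, r in 4-64
W = [[1.0 if i == j else 0.0 for j in range(d_out)] for i in range(d_in)]  # frozen
B = [[0.5] for _ in range(d_in)]  # trainable, shape d_in x r
A = [[0.1, 0.2, 0.3, 0.4]]        # trainable, shape r x d_out

delta = matmul(B, A)              # low-rank update B @ A
W_eff = [[w + d for w, d in zip(w_row, d_row)] for w_row, d_row in zip(W, delta)]

frozen = d_in * d_out             # parameters left untouched
trainable = d_in * r + r * d_out  # parameters actually updated
print(f"trainable {trainable} vs frozen {frozen}")
```

At realistic sizes (d in the thousands, r of a few dozen) the trainable fraction drops well below one percent, which is what makes the tuning "lightweight".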
2307.02477 | 65 | Kevin Brooks, Michele Catasta, Yong Cheng, Colin Cherry, Christopher A. Choquette-Choo, Aakanksha Chowdhery, Clément Crepy, Shachi Dave, Mostafa Dehghani, Sunipa Dev, Jacob Devlin, Mark Díaz, Nan Du, Ethan Dyer, Vlad Feinberg, Fangxiaoyu Feng, Vlad Fienber, Markus Freitag, Xavier Garcia, Sebastian Gehrmann, Lucas Gonzalez, Guy Gur-Ari, Steven Hand, Hadi Hashemi, Le Hou, Joshua Howland, Andrea Hu, Jeffrey Hui, Jeremy Hurwitz, Michael Isard, Abe Ittycheriah, Matthew Jagielski, Wenhao Jia, Kathleen Kenealy, Maxim Krikun, Sneha Kudugunta, Chang Lan, Katherine Lee, Benjamin Lee, Eric Li, Music Li, Wei Li, YaGuang Li, Jian Li, Hyeontaek Lim, Hanzhao Lin, Zhongtao Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam | 2307.02477#65 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 65 | I'm Alice . My friend Bob and I want to transport as many target objects as possible to the bed with the help of containers within 3000 steps . I can hold two things at a time , and they can be objects or containers . I can grasp containers and put objects into them to hold more objects at a time . Given our shared goal , dialogue history , my progress , and previous actions , please help me choose the best available action to achieve the goal as soon as possible . Note that a container can contain three objects , and will be lost once transported to the bed . I can only put objects into the container I hold after grasping it . All objects are denoted as < name > ( id ) , such as < table > (712) . Actions take several steps to finish . It may be costly to go to another room or transport to the bed , use these actions sparingly . Goal : Transport 3 pens , 1 lighter , 3 ipods , 2 purses , 1 key to the bed . Progress : I've taken 1313/3000 steps . We've already transported <key > (3207585) , < purse > (15433283) , < ipod > (6544816) , < purse > (11543537) , <pen > (12835254) to | 2307.02485#65 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 66 | # 6.1 Hallucination Mitigation
Although LLMs are used in various fields, a significant challenge is the phenomenon of "hallucination", where language models generate outputs that are plausible-sounding but factually incorrect or not grounded in the input data [111], [112]. For instance, when a user seeks today's news events, the LLMs may erroneously recommend/generate news that, in fact, does not exist. The causes of this problem are manifold, such as source-reference divergence in the dataset, and the training & modeling choices of neural network models [113]. Moreover, the hallucination issue poses severe threats to users and society, especially in high-stakes recommendation scenarios such as medical recommendations or legal advice, where the dissemination of incorrect information can have severe real consequences. To address such issues, employing factual knowledge graphs as supplementary factual knowledge during the training and inference stages of LLMs for RecSys is a promising way to mitigate the hallucination problem. In addition, the model's output stage can be scrutinized to verify the accuracy and factuality of the produced content.
# 6.2 Trustworthy Large Language Models for Recommender Systems
The development of LLMs for RecSys has brought significant benefits to humans, including economic value creation, time | 2307.02046#66 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
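For the output-stage scrutiny suggested in the record above (2307.02046#66), one simple safeguard is to ground generated recommendations against a known item catalog and drop anything that cannot be matched. The catalog and titles below are made-up placeholders, and real systems would use fuzzier matching than exact lowercase comparison.

```python
def filter_hallucinated(recommendations, catalog):
    """Split generated items into those grounded in the catalog and the rest."""
    known = {title.lower() for title in catalog}
    kept, dropped = [], []
    for title in recommendations:
        (kept if title.lower() in known else dropped).append(title)
    return kept, dropped

catalog = ["Inception", "Interstellar", "Tenet"]
generated = ["Tenet", "Interstellar 2: The Return"]  # second title does not exist
kept, dropped = filter_hallucinated(generated, catalog)
print(kept, dropped)  # ['Tenet'] ['Interstellar 2: The Return']
```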
2307.02477 | 66 | Liu, Frederick Liu, Marcello Maggioni, Aroma Mahendru, Joshua Maynez, Vedant Misra, Maysam Moussalem, Zachary Nado, John Nham, Eric Ni, Andrew Nystrom, Alicia Parrish, Marie Pellat, Martin Polacek, Alex Polozov, Reiner Pope, Siyuan Qiao, Emily Reif, Bryan Richter, Parker Riley, Alex Castro Ros, Aurko Roy, Brennan Saeta, Rajkumar Samuel, Renee Shelby, Ambrose Slone, Daniel Smilkov, David R. So, Daniel Sohn, Simon Tokumine, Dasha Valter, Vijay Vasudevan, Kiran Vodrahalli, Xuezhi Wang, Pidong Wang, Zirui Wang, Tao Wang, John Wieting, Yuhuai Wu, Kelvin Xu, Yunhan Xu, Linting Xue, Pengcheng Yin, Jiahui Yu, Qiao Zhang, Steven Zheng, Ce Zheng, Weikang Zhou, Denny Zhou, Slav Petrov, and Yonghui Wu. 2023. PaLM 2 technical report. | 2307.02477#66 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 66 | < purse > (15433283) , < ipod > (6544816) , < purse > (11543537) , <pen > (12835254) to the bed . I'm holding nothing . I'm in the < Bedroom > (2000) , where I've explored all of it and found the goal position bed . Last time I saw Bob was in the < Office > (3000) , he was holding nothing . I've explored part of the < Livingroom > (1000) , and I found a container < wicker_basket > (13929244) there . I've explored none of the < Office > (3000) , and I found target objects < ipod > (2031829) , < ipod > (16421571) there . I've explored part of the < Livingroom > (4000) . I've explored none of the < Kitchen > (5000) . I've explored all of the < Livingroom > (6000) , and I found a target object <pen > (1440084) there . Dialogue history : Alice : " Hi , I'll let you know if I find any target objects and containers , finish any subgoals , and ask for your help when necessary ." Bob : " Thanks ! I'll let you know | 2307.02485#66 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 67 | # 6.2 Trustworthy Large Language Models for Recommender Systems
The development of LLMs for RecSys has brought significant benefits to humans, including economic value creation, time
and effort savings, and social benefits. However, these data-driven LLMs for RecSys might also pose serious threats to users and society [5], [114], [115], due to unreliable decision making, unequal treatment of various consumers or producers, a lack of transparency and explainability, and privacy issues stemming from the extensive use of personal data for customization, among other concerns. As a result, there is an increasing concern about the issue of trustworthiness in LLMs for RecSys to mitigate the negative impacts and enhance public trust in LLM-based RecSys techniques. Thus, it is desired to achieve trustworthiness in LLMs for RecSys from four of the most crucial dimensions, including Safety&Robustness, Non-discrimination&Fairness, Explainability, and Privacy.
# 6.2.1 Safety&Robustness | 2307.02046#67 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 67 | Anthropic. 2023. Introducing Claude.
Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, and Jared Kaplan. 2022. Training a helpful and harmless assistant with reinforcement learning from human feedback.
Gašper Beguš, Maksymilian Dąbkowski, and Ryan Rhodes. 2023. Large linguistic models: Analyzing theoretical linguistic abilities of LLMs. ArXiv preprint, abs/2305.00948.
Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics, 48(1):207–219.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 67 | let you know if I find any target objects and containers , finish any subgoals , and ask for your help when necessary ." Bob : " Thanks ! I'll let you know if I find any target objects and containers , finish any subgoals , and ask for your help when necessary ." Alice : " Bob , I suggest you transport the basket with key and purse to the bed in the Bedroom (2000) . Meanwhile , I'll go to the Livingroom (6000) to grab the pen and the container from Livingroom (1000) to collect more items . Let's meet at the bed afterward to plan further ." Bob : " Alice , I'm heading to the Bedroom to transport the basket with key and purse . After you grab the pen and container , let's meet at the bed to plan our next steps . Let me know if you find any other target objects ." Bob : " Alice , I just transported the key and purse to the bed . Please grab the pen and container you found and head to the bed . Let me know if you find any other target objects or containers on your way ." Previous actions : go to < Bedroom > (2000) at step 396 , send a message at step 584 , go grasp container < plastic_basket > (15946891) at step 585 , go grasp target object < | 2307.02485#67 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 68 | # 6.2.1 Safety&Robustness
LLMs have been proven to advance recommender systems in various aspects, but they are also highly vulnerable to adversarial perturbations (i.e., minor changes in the input) that can compromise the safety and robustness of their uses in safety-critical applications [44], [114]. Attacks exploiting these vulnerabilities through noisy inputs are frequently carried out with malicious intent, such as to gain unlawful profits and manipulate markets for specific products [116]–[119]. Therefore, it is crucial to ensure that the output of LLMs for recommender systems is stable given small changes in the LLMs' input. In order to enhance model safety and robustness, GPT-4 integrates safety-related prompts during reinforcement learning from human feedback (RLHF) [120]. However, the RLHF method requires a significant number of experts for manual labeling, which might not be feasible in practice. An alternative solution might involve the automatic pre-processing of prompts designed for recommender tasks before they are input to LLMs. This could include screening for malicious prompts or standardizing prompts with similar purposes into the same final input, thus potentially improving safety and robustness. In addition, as one of the representative techniques, adversarial training [121] can be used to improve the robustness of LLM-based recommender systems.
# 6.2.2 Non-discrimination&Fairness | 2307.02046#68 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02485 | 68 | at step 396 , send a message at step 584 , go grasp container < plastic_basket > (15946891) at step 585 , go grasp target object < ipod > (6544816) at step 880 , put < ipod > (6544816) into the container < plastic_basket > (15946891) at step 889 , go grasp target object < purse > (11543537) at step 929 , put < purse > (11543537) into the container < plastic_basket > (15946891) at step 1033 , go grasp target object <pen > (12835254) at step 1057 , put <pen > (12835254) into the container < plastic_basket > (15946891) at step 1098 , transport objects I'm holding to the bed at step 1131 | 2307.02485#68 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 69 | # 6.2.2 Non-discrimination&Fairness
LLMs, trained on vast datasets, often inadvertently learn and perpetuate biases and stereotypes in the human data that will later reveal themselves in the recommendation results. This phenomenon can lead to a range of adverse outcomes, from the propagation of stereotypes to the unfair treatment of certain user groups [2], [122], [123]. For instance, in the context of recommender systems, these biases can manifest as discriminatory recommendations, where certain items are unfairly promoted or demoted based on these learned biases. More recently, a few studies such as FaiRLLM [89] and UP5 [104] explore the fairness problem in recommender systems brought by LLMs, which only focus on the user side and the item generation task. Concurrently, Hou et al. [91] guide LLMs with prompts to formalize the recommendation task as a conditional ranking task to improve item-side fairness. However, studies on non-discrimination and fairness in LLMs for RecSys are at a preliminary stage, and further research is still needed.
12
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, SUBMISSION 2023
# 6.2.3 Explainability | 2307.02046#69 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 69 | Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 69 |
# TDW-MAT Prompts
Available actions : ( You can only choose the action in the list ) A . send a message : " Bob , I transported an ipod , purse , and pen to the bed . I found a container < wicker_basket > (13929244) in the < Livingroom > (1000) and target objects < ipod > (2031829) and < ipod > (16421571) in the < Office > (3000) . Please collect the ipods and explore the < Office > for more objects . I'll collect the container and pen from the < Livingroom > and meet you at the bed ." B . go grasp target object <pen > (1440084) C . go grasp target object < ipod > (2031829) D . go grasp target object < ipod > (16421571) E . go grasp container < wicker_basket > (13929244) F . go to < Livingroom > (1000) G . go to < Office > (3000) H . go to < Livingroom > (4000) I . go to < Kitchen > (5000) J . go to < Livingroom > (6000) Answer : Let's think step by step .
# LLM outputs | 2307.02485#69 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 70 |
# 6.2.3 Explainability
Owing to privacy and security considerations, certain companies and organizations choose not to open-source their advanced LLMs, such as ChatGPT and GPT-4, meaning that the architectures and parameters of these LLMs for RecSys are not publicly available for the public to understand their complex internal working mechanisms. Consequently, LLMs for RecSys can be treated as "black boxes", complicating the process for users trying to comprehend why a specific output or recommendation was produced. Recently, Bills et al. [124] tried to use GPT-4 to generate natural language descriptions to explain the neuronal behavior in the GPT-2 model. While this study is foundational, it also introduces fresh perspectives for comprehending the workings of LLMs. Neurons exhibit intricate behaviors that may not be easily encapsulated through simple natural language. To this end, efforts should be made to understand how LLMs for RecSys function, so as to enhance the explainability of LLM-based recommender systems.
# 6.2.4 Privacy | 2307.02046#70 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
Sébastien Bubeck, Varun Chandrasekaran, Ronen Eldan, Johannes Gehrke, Eric Horvitz, Ece Kamar, Peter Lee, Yin Tat Lee, Yuanzhi Li, Scott Lundberg, Harsha Nori, Hamid Palangi, Marco Tulio Ribeiro, and Yi Zhang. 2023. Sparks of artificial general intelligence: Early experiments with GPT-4.
Nicholas Carlini, Florian Tramer, Eric Wallace, Matthew Jagielski, Ariel Herbert-Voss, Katherine Lee, Adam Roberts, Tom Brown, Dawn Song, Ulfar Erlingsson, Alina Oprea, and Colin Raffel. 2020. Extracting training data from large language models. ArXiv preprint, abs/2012.07805.
Kamalika Chaudhuri, Brighten Godfrey, and David Ratajczak. 2003. On the complexity of the game of SET.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, | 2307.02477#70 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 70 | # LLM outputs
First, we need to collect the remaining target objects. You should go to the <Livingroom> (6000) to grasp the <pen> (1440084) since it's a target object. So, the best available action is: B. go grasp target object <pen> (1440084)
# D Template Language Communication for MCTS-based HP Agent
To enable the MCTS-based Hierarchical Planner to also leverage the new communication capability, we design three types of template language communication for it.
Progress Sharing P An important motivation for communication is to share progress with others to update each other about the total progress of the task. So we design the first template communication to share progress.
When the agent carries out the action of put, which implies a new subgoal has been achieved by the agent, it will send a message such as:
"P": "I successfully put poundcake <383> on kitchentable <130>, and they are in kitchen <56>."
When the agent receives such a message, it will process it and extract the sub-goal satisfied, and use it to update its inner tracking of the task progress, thus avoiding taking an already satisfied sub-goal as a sub-goal again, so as to cooperate better. | 2307.02485#70 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 71 | # 6.2.4 Privacy
Privacy is a paramount concern when it comes to LLMs for RecSys. The reasons for this are multifold. On the one hand, the success of LLMs for recommender systems highly depends on large quantities of data collected from a variety of sources, such as social media and books. Users' sensitive information (e.g., email and gender) contained in the data is likely to be used to train modern LLMs for enhancing prediction performance and providing personalized experiences, leading to the risk of leaking users' private information. On the other hand, these systems often handle sensitive user data, including personal preferences, online behaviors, and other identifiable information. If not properly protected, this data could be exploited, leading to breaches of privacy. Therefore, ensuring the privacy and security of this data is crucial. Carlini et al. [125] show that LLMs might reveal some users' real identities or private information when generating text. Recently, Li et al. [126] introduce RAPT, which allows users to customize LLMs with their private data based on prompt tuning. It provides a direction for how to protect user privacy in LLMs for RecSys.
# 6.3 Vertical Domain-Specific LLMs for Recommender Systems | 2307.02046#71 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 71 | Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto,
Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, Alex Ray, Raul Puri, Gretchen Krueger, Michael Petrov, Heidy Khlaaf, Girish Sastry, Pamela Mishkin, Brooke Chan, Scott Gray, Nick Ryder, Mikhail Pavlov, Alethea Power, Lukasz Kaiser, Mohammad Bavarian, Clemens Winter, Philippe Tillet, Felipe Petroski Such, Dave Cummings, Matthias Plappert, Fotios Chantzis, Elizabeth Barnes, Ariel Herbert-Voss, William Hebgen Guss, Alex Nichol, Alex Paino, Nikolas Tezak, Jie Tang, Igor Babuschkin, Suchir Balaji, Shantanu Jain, William Saunders, Christopher Hesse, Andrew N. Carr, Jan Leike, Josh Achiam, Vedant Misra, Evan Morikawa, Alec Radford, Matthew Knight, Miles Brundage, Mira Murati, Katie Mayer, Peter Welinder, Bob McGrew, Dario Amodei, Sam McCandlish, Ilya Sutskever, and Wojciech Zaremba. 2021. Evaluating large language models trained on code. | 2307.02477#71 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 71 | Intent Sharing I Another important motivation for communication is to share intent with each other so that all the agents can coordinate their plans together. So we design a template communication to share intent.
When the agent changes its sub-goal (practically, the Monte Carlo Tree Search High-Level Planner gives a new plan), it will tell the other agents its current sub-goal by sending a message such as:
"I": "Now I want to put cutleryfork <369> in dishwasher <104>, and I have not found it yet."
When the agent receives such a message, it will process it and extract the other agents' new sub-goal and update its belief about the others' intents, so it will not choose the same sub-goal as the others, avoiding duplication and improving efficiency.
Belief Sharing B Sharing the scenes the agent has just seen with the other agents can help them update their beliefs about object locations as well; more importantly, this helps agents build common ground on their beliefs about the objects so they can cooperate better. So we also design a template communication to share beliefs.
When entering a new room, the agent will send to the others all goal objects found, or containers newly checked with no findings or target objects in them, such as:
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 72 | # 6.3 Vertical Domain-Specific LLMs for Recommender Systems
General LLMs, such as ChatGPT, have powerful generation and inference capabilities that make them universal tools in various areas. Vertical domain-specific LLMs are LLMs that have been trained and optimized for a specific domain or industry, such as health [127] and finance [56]. Compared to general LLMs for RecSys, vertical domain-specific LLM-empowered RecSys are more focused on the knowledge and skills of a particular domain and have a higher degree of domain expertise and practicality. Instead of sifting through irrelevant information, users can focus on content that is directly aligned with their work or personalized preferences. By providing tailored recommendations, vertical domain-specific LLMs for RecSys can save professionals a significant amount of time. More recently, existing works have presented vertical domain-specific LLMs that cover a wide range of areas, such as medical care [128], [129], law [130], [131], | 2307.02046#72 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02485 | 72 | When entering a new room, the agent will send all goal objects found or containers newly checked with no ï¬ndings or target objects in it to others, such as:
"B": "I found nothing is inside kitchencabinet <75>. nothing is inside kitchencabinet <76>. nothing is inside dishwasher <104>. nothing is inside cabinet <216>. cutleryfork <369>, cutleryfork <370> and plate <373> are inside kitchen <11>."
When the agent receives such a message, it will process and extract the information contained in the message to update its belief about the location distributions of the objects, just as if it had seen them itself.
Note that the agents may also combine these three types of template communication to send one combined message at a time, instead of multiple messages over several steps, to improve efficiency.
22 | 2307.02485#72 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 73 | and finance [132]. Because they are trained for specific domains, these vertical domain-specific LLMs can better understand and process domain-specific knowledge, terminology, and context. Yet the requirement for vast amounts of domain-specific data to train these models poses significant challenges in data collection and annotation. As such, constructing high-quality domain datasets and using suitable tuning strategies for specific domains are necessary steps in the development of vertical domain-specific LLMs for RecSys. In particular, Jin et al. [133] propose a multilingual dataset named Amazon-M2 as a new setting of session-based recommendations from Amazon (i.e., sessions containing the interacted items of users) and inspire opportunities to leverage LLMs as RecSys to learn on session graphs with multilingual and textual data, such as item (node) attributes including product titles, prices, and descriptions across session graphs of users from different locales (multilingual).
# 6.4 Users&Items Indexing | 2307.02046#73 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 73 | Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, Parker Schuh, Kensen Shi, Sasha Tsvyashchenko, Joshua Maynez, Abhishek Rao, Parker Barnes, Yi Tay, Noam Shazeer, Vinodkumar Prabhakaran, Emily Reif, Nan Du, Ben Hutchinson, Reiner Pope, James Bradbury, Jacob Austin, Michael Isard, Guy Gur-Ari, Pengcheng Yin, Toju Duke, Anselm Levskaya, Sanjay Ghemawat, Sunipa Dev, Henryk Michalewski, Xavier Garcia, Vedant Misra, Kevin Robinson, Liam Fedus, Denny Zhou, Daphne Ippolito, David Luan, Hyeontaek Lim, Barret Zoph, Alexander Spiridonov, Ryan Sepassi, David Dohan, Shivani Agrawal, Mark Omernick, Andrew M. Dai, Thanumalayan Sankaranarayana Pillai, Marie Pellat, Aitor | 2307.02477#73 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 74 | # 6.4 Users&Items Indexing
Recent research suggests that LLMs may not perform well when dealing with long texts in RecSys, as it can be difficult to effectively capture user-item interaction information in long texts [91]. On the other hand, user-item interactions (e.g., click, like, and subscription) with unique identities (i.e., discrete IDs) in recommender systems contain rich collaborative knowledge and make great contributions to understanding and predicting user preferences, encompassing both explicit actions like ratings and reviews, as well as implicit behaviors like browsing history or purchase data. Several studies, including InstructRec [20], PALR [61], GPT4Rec [134] and UP5 [104], have attempted to utilize user-item history interaction information as text prompts inputted into LLMs (e.g., ChatGPT) in order to make recommendations. To address the long text problem, one possible solution is to perform user and item indexing for learning collaborative knowledge by incorporating user-item interactions. Therefore, rather than merely using text formats to represent users and items, advanced methods for indexing users&items are desired to build LLM-based recommender systems.
# 6.5 Fine-tuning Efficiency | 2307.02046#74 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02046 | 75 | # 6.5 Fine-tuning Efficiency
In the application of LLMs to RecSys, fine-tuning refers to the process of adapting a pre-trained LLM to a specific task or domain, such as recommending movies [61] or books [59]. This process allows the model to leverage the general language understanding capabilities learned during pre-training while specializing its knowledge to the task at hand. However, fine-tuning can be computationally expensive, particularly for very large models and large datasets in recommender systems. Therefore, improving the efficiency of fine-tuning is a key challenge. In this case, Fu et al. [135] use adapter modules, which are small, plug-in neural networks that can be optimized separately from the main model, to achieve parameter-efficient transfer learning. However, the current adapter tuning techniques for RecSys fall slightly behind full-model fine-tuning when it comes to cross-platform image recommendation. The exploration of adapter tuning effects for multi-modal (i.e., both text and image) RecSys is a potential future direction. In addition,
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, SUBMISSION 2023
given that most typical adapter tuning does not help to speed up the training process in practice, it is important to explore effective optimization techniques to reduce the computational cost and time for RecSys through end-to-end training.
# 6.6 Data Augmentation | 2307.02046#75 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 75 | Catasta, Jason Wei, Kathy Meier-Hellstern, Douglas Eck, Jeff Dean, Slav Petrov, and Noah Fiedel. 2022. PaLM: Scaling language modeling with pathways.
Peter Clark, Oyvind Tafjord, and Kyle Richardson. 2020. Transformers as soft reasoners over language. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI 2020, pages 3882–3890. ijcai.org.
Ben Coleman and Kevin Hartshorn. 2012. Game, set, math. Mathematics Magazine, 85(2):83–96.
Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, and Alexandre Défossez. 2023. Simple and controllable music generation.
Ishita Dasgupta, Andrew K. Lampinen, Stephanie C. Y. Chan, Antonia Creswell, Dharshan Kumaran, James L. McClelland, and Felix Hill. 2022. Language models show human-like content effects on reasoning.
Benjamin Lent Davis and Diane Maclagan. 2003. The card game SET. The Mathematical Intelligencer, 25:33–40.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 76 | # 6.6 Data Augmentation
Most conventional studies in the recommender systems domain rely on real data-driven research, founded on the collection of user behavior data via user interaction in digital platforms or through the recruitment of annotators. Nonetheless, these approaches appear to be resource-intensive and may not be sustainable in the long term. The quality and variety of the input data directly influence the performance and versatility of the models. With the aim to overcome the shortcomings of real data-centric studies, Wang et al. [136] introduce RecAgent, a simulation paradigm for recommender systems based on LLMs, which includes a user module for browsing and communication on social media, and a recommender module for providing search or recommendation lists. Additionally, LLM-Rec [96] incorporates four prompting strategies to improve personalized content recommendations, which demonstrates through experiments that diverse prompts and input augmentation techniques can enhance recommendation performance. Therefore, rather than solely deploying LLMs as recommender systems, utilizing them for data augmentation to bolster recommendations emerges as a promising strategy in the future.
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 76 | Benjamin Lent Davis and Diane Maclagan. 2003. The card game SET. The Mathematical Intelligencer, 25:33–40.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171–4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Sifatkaur Dhingra, Manmeet Singh, Vaisakh SB, Neetiraj Malviya, and Sukhpal Singh Gill. 2023. Mind meets machine: Unravelling GPT-4's cognitive psychology.
Jesse Dodge, Maarten Sap, Ana Marasović, William Agnew, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. 2021. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, pages 1286–1305, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
7 CONCLUSION As one of the most advanced AI techniques, LLMs have achieved great success in various applications, such as molecule discovery and finance, owing to their remarkable abilities in language understanding and generation, powerful generalization and reasoning skills, and prompt-adaptation to new tasks and diverse domains. Similarly, increasing efforts have been made to revolutionize recommender systems with LLMs, so as to provide high-quality and personalized suggestion services. Given the rapid evolution of this research topic in recommender systems, there is a pressing need for a systematic overview that comprehensively summarizes the existing LLM-empowered recommender systems. To fill the gap, in this survey, we have provided a comprehensive overview of LLM-empowered RecSys from pre-training&fine-tuning and prompting paradigms, so as to provide researchers and practitioners in relevant fields with an in-depth understanding. Nevertheless, the current research on LLMs for RecSys is still in its early stage, which calls for more systematic and comprehensive studies of LLMs in this field. Therefore, we also discussed some potential future directions in this field.
# REFERENCES | 2307.02046#77 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 77 | Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. 2023. Improving factuality and reasoning in language models through multiagent debate. ArXiv preprint, abs/2305.14325.
Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jiang, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D. Hwang, Soumya Sanyal, Sean Welleck, Xiang Ren, Allyson Ettinger, Zaid Harchaoui, and Yejin Choi. 2023. Faith and fate: Limits of transformers on compositionality.
Bradley Efron and Robert J. Tibshirani. 1993. An Introduction to the Bootstrap. Number 57 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, Boca Raton, Florida, USA.
Jacob Eisenstein. 2022. Informativeness and invariance: Two perspectives on spurious correlations in natural language. In Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 4326–4331, Seattle, United States. Association for Computational Linguistics.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 78 | # REFERENCES
[1] W. Fan, Y. Ma, Q. Li, J. Wang, G. Cai, J. Tang, and D. Yin, âA graph neural network framework for social recommendations,â IEEE Transactions on Knowledge and Data Engineering, 2020. X. Chen, W. Fan, J. Chen, H. Liu, Z. Liu, Z. Zhang, and Q. Li, âFairly adaptive negative sampling for recommendations,â in Proceedings of the ACM Web Conference 2023, 2023, pp. 3723â3733.
[2]
[3]
[4] | 2307.02046#78 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 78 | Yanai Elazar, Shauli Ravfogel, Alon Jacovi, and Yoav Goldberg. 2021. Amnesic probing: Behavioral explanation with amnesic counterfactuals. Transactions of the Association for Computational Linguistics, 9:160–175.
Allyson Ettinger. 2020. What BERT is not: Lessons from a new suite of psycholinguistic diagnostics for language models. Transactions of the Association for Computational Linguistics, 8:34–48.
Jörg Frohberg and Frank Binder. 2022. CRASS: A novel data set and benchmark to test counterfactual reasoning of large language models. In Proceedings of the Thirteenth Language Resources and Evaluation Conference, pages 2126–2140, Marseille, France. European Language Resources Association.
Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse
text for language modeling. ArXiv preprint, abs/2101.00027. | 2307.02477#78 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 79 | [2]
[3]
[4]
Y. Gao, T. Sheng, Y. Xiang, Y. Xiong, H. Wang, and J. Zhang, âChat-rec: Towards interactive and explainable llms-augmented recommender system,â arXiv preprint arXiv:2303.14524, 2023. J. Chen, L. Ma, X. Li, N. Thakurdesai, J. Xu, J. H. Cho, K. Nag, E. Korpeoglu, S. Kumar, and K. Achan, âKnowledge graph completion models are few-shot learners: An empirical study of relation labeling in e-commerce with llms,â arXiv preprint arXiv:2305.09858, 2023. | 2307.02046#79 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 79 | text for language modeling. ArXiv preprint, abs/2101.00027.
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi, Dheeru Dua, Yanai Elazar, Ananth Gottumukkala, Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco, Daniel Khashabi, Kevin Lin, Jiangming Liu, Nelson F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer Singh, Noah A. Smith, Sanjay Subramanian, Reut Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou. 2020. Evaluating models’ local decision boundaries via contrast sets. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1307–1323, Online. Association for Computational Linguistics.
Atticus Geiger, Hanson Lu, Thomas Icard, and Christopher Potts. 2021. Causal abstractions of neural networks. In Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 9574–9586. | 2307.02477#79 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 80 | [5] W. Fan, X. Zhao, X. Chen, J. Su, J. Gao, L. Wang, Q. Liu, Y. Wang, H. Xu, L. Chen et al., âA comprehensive survey on trustworthy recommender systems,â arXiv preprint arXiv:2209.10117, 2022. X. He, K. Deng, X. Wang, Y. Li, Y. Zhang, and M. Wang, âLightgcn: Simplifying and powering graph convolution network for recommendation,â in Proceedings of the 43rd International ACM SIGIR conference on research and development in Information Retrieval, 2020, pp. 639â648.
[6] | 2307.02046#80 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 80 | Atticus Geiger, Zhengxuan Wu, Hanson Lu, Josh Rozner, Elisa Kreiss, Thomas Icard, Noah D. Goodman, and Christopher Potts. 2022. Inducing causal structure for interpretable neural networks. In International Conference on Machine Learning, ICML 2022, 17-23 July 2022, Baltimore, Maryland, USA, volume 162 of Proceedings of Machine Learning Research, pages 7324–7338. PMLR.
Tobias Gerstenberg, Noah D. Goodman, David A. Lagnado, and Joshua B. Tenenbaum. 2021. A counterfactual simulation model of causal judgments for physical events. Psychological review.
Tobias Gerstenberg, Matthew Peterson, Noah D. Goodman, David A. Lagnado, and Joshua B. Tenenbaum. 2017. Eye-tracking causality. Psychological Science, 28:1731–1744.
Biyang Guo, Xin Zhang, Ziyuan Wang, Minqi Jiang, Jinran Nie, Yuxuan Ding, Jianwei Yue, and Yupeng Wu. 2023. How close is ChatGPT to human experts? Comparison corpus, evaluation, and detection.
Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy | 2307.02477#80 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 81 | [6]
[7] W. Fan, T. Derr, Y. Ma, J. Wang, J. Tang, and Q. Li, âDeep adversarial social recommendation,â in 28th International Joint Conference on Artificial Intelligence (IJCAI-19). International Joint Conferences on Artificial Intelligence, 2019, pp. 1351â1357. L. Zheng, V. Noroozi, and P. S. Yu, âJoint deep modeling of users and items using reviews for recommendation,â in Proceedings of the tenth ACM international conference on web search and data mining, 2017, pp. 425â434. S. Zhang, L. Yao, A. Sun, and Y. Tay, âDeep learning based recommender system: A survey and new perspectives,â ACM computing surveys (CSUR), vol. 52, no. 1, pp. 1â38, 2019.
[8]
[9]
[10] W. Fan, C. Liu, Y. Liu, J. Li, H. Li, H. Liu, J. Tang, and Q. Li, âGenerative diffusion models on graphs: Methods and applications,â arXiv preprint arXiv:2302.02591, 2023. | 2307.02046#81 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 81 | Simeng Han, Hailey Schoelkopf, Yilun Zhao, Zhenting Qi, Martin Riddell, Luke Benson, Lucy
Sun, Ekaterina Zubova, Yujie Qiao, Matthew Burtell, David Peng, Jonathan Fan, Yixin Liu, Brian Wong, Malcolm Sailor, Ansong Ni, Linyong Nan, Jungo Kasai, Tao Yu, Rui Zhang, Shafiq Joty, Alexander R. Fabbri, Wojciech Kryscinski, Xi Victoria Lin, Caiming Xiong, and Dragomir Radev. 2022. FOLIO: Natural language reasoning with first-order logic.
Irene Heim and Angelika Kratzer. 1998. Semantics in Generative Grammar. Blackwell.
John Hewitt and Percy Liang. 2019. Designing and interpreting probes with control tasks. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2733–2743, Hong Kong, China. Association for Computational Linguistics.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02046 | 82 | [11] B. Hidasi, A. Karatzoglou, L. Baltrunas, and D. Tikk, âSession- based recommendations with recurrent neural networks,â arXiv preprint arXiv:1511.06939, 2015.
[12] W. Fan, Y. Ma, D. Yin, J. Wang, J. Tang, and Q. Li, âDeep social collaborative filtering,â in Proceedings of the 13th ACM Conference on Recommender Systems, 2019, pp. 305â313.
[12] W. Fan, Y. Ma, D. Yin, J. Wang, J. Tang, and Q. Li, “Deep social collaborative filtering,” in Proceedings of the 13th ACM Conference on Recommender Systems, 2019, pp. 305–313.
[14] Z. Qiu, X. Wu, J. Gao, and W. Fan, âU-bert: Pre-training user representations for improved recommendation,â in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, no. 5, 2021, pp. 4320â4327. | 2307.02046#82 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 82 | Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de Las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katie Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Jack W. Rae, Oriol Vinyals, and Laurent Sifre. 2022. Training compute-optimal large language models.
Jennifer Hu and Roger Levy. 2023. Prompt-based methods may underestimate large language models' linguistic generalizations. ArXiv preprint, abs/2305.13264.
Cheng-Zhi Anna Huang, Tim Cooijmans, Adam Roberts, Aaron C. Courville, and Douglas Eck. 2019a. Counterpoint by convolution. ArXiv preprint, abs/1903.07227.
Cheng-Zhi Anna Huang, Curtis Hawthorne, Adam Roberts, Monica Dinculescu, James Wexler, Leon Hong, and Jacob Howcroft. 2019b. The bach doodle: Approachable music composition with machine learning at scale. | 2307.02477#82 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
[15] T. Brown, B. Mann, N. Ryder, M. Subbiah, J. D. Kaplan, P. Dhariwal, A. Neelakantan, P. Shyam, G. Sastry, A. Askell et al., "Language models are few-shot learners," NeurIPS, 2020. [16] L. Zhou, H. Palangi, L. Zhang, H. Hu, J. Corso, and J. Gao, "Unified vision-language pre-training for image captioning and VQA," in Proceedings of the AAAI conference on artificial intelligence, vol. 34, no. 07, 2020, pp. 13041–13049. [17] J. Li, Y. Liu, W. Fan, X.-Y. Wei, H. Liu, J. Tang, and Q. Li, "Empowering molecule discovery for molecule-caption translation with large language models: A ChatGPT perspective," arXiv preprint arXiv:2306.06615, 2023.
| 2307.02046#83 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |