doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
stringlengths 10-10 | int64 0-936 | stringlengths 401-2.02k | stringlengths 12-14 | stringlengths 8-162 | stringlengths 228-1.92k | stringlengths 31-31 | stringlengths 7-6.97k | stringlengths 5-107 | stringlengths 4-398 ⌀ | stringlengths 8-194 ⌀ | stringlengths 5-17 | stringlengths 8-8 | stringlengths 8-8 | list
2307.03692 | 23 | Results of SFT are shown in Figure 4(a). We see that the models' instruction-tuning capabilities stabilize at a level of 0.9-0.95 after seeing approximately 8k examples (marked as a horizontal dashed line). We will refer to this training phase as the "format-infusion" phase. As a side note, we observe that bigger models might reach the 0.9 IFS level relatively faster (as far as we can infer from a two-point experiment),
which is consistent with the good results reported for SFT of 65B LLaMA on only 1k examples (Zhou et al. 2023). | 2307.03692#23 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and that
further finetuning can result in changes in the underlying base model
semantics. As an example of semantic change, we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
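The chunk above describes IFS, the ratio of well-formatted responses, and its use as an early stopping signal once it plateaus at 0.9-0.95. Below is a minimal sketch of such a ratio metric and stopping rule; the formatting heuristic and function names are illustrative assumptions, not the paper's exact classifier.

```python
def is_well_formatted(response: str) -> bool:
    # Illustrative heuristic for a "well formatted" answer: a capitalized
    # sentence ending in terminal punctuation. The paper's actual classifier
    # of partial vs. full sentences may differ.
    response = response.strip()
    return bool(response) and response[0].isupper() and response[-1] in ".!?"

def instruction_following_score(responses: list[str]) -> float:
    """IFS as described: the fraction of well-formatted responses, in [0, 1]."""
    return sum(is_well_formatted(r) for r in responses) / max(len(responses), 1)

def should_stop(ifs_history: list[float], threshold: float = 0.9) -> bool:
    # Early-stopping sketch: stop once IFS stays above the plateau level
    # (0.9-0.95 in the excerpt) for two consecutive checkpoints.
    return len(ifs_history) >= 2 and min(ifs_history[-2:]) >= threshold
```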
and P5 [62]. Moreover, LLMs, particularly ChatGPT, have been utilized to generate explainable recommendations. One such example is Chat-Rec [3], which leverages ChatGPT to provide clear and comprehensible reasoning behind its suggestions, thereby fostering trust and user engagement. Furthermore, the interactive and conversational capabilities of LLMs have been harnessed to create a more dynamic recommendation experience. For instance, UniCRS [63] develops a knowledge-enhanced prompt learning framework to fulfill both conversation and recommendation subtasks based on a pre-trained language model. UniMIND [64] proposes a unified multi-task learning framework by using prompt-based learning strategies in conversational recommender systems. Furthermore, it is worth noting that to investigate the potential of LLMs in learning on graphs, Chen et al. [18] introduce two possible pipelines: LLMs-as-Enhancers (e.g., LLMs enhance the textual information of node attributes) and LLMs-as-Predictors (e.g., LLMs serve as independent predictors in graph learning tasks such as link prediction), which provide guidance on the design of LLMs for graph-based recommendations.
# 3 DEEP REPRESENTATION LEARNING FOR LLM-BASED RECOMMENDER SYSTEMS | 2307.02046#24 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 24 | An example of the prompt and FLACUNA's response is shown below.
Prompt: [...] assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. [...] skills in professional or work settings for the following question. Write a user manual for a household appliance, providing step-by-step instructions for its operation, maintenance, and troubleshooting, using clear and simple language.
FLACUNA's response:
| 2307.02053#24 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 24 | 3.8 Chess Chess playing has long been regarded as a testbed for AI (Silver et al., 2017; Tomasev et al., 2020), and modern LMs have exhibited abilities that imply an understanding of chess rules (Srivastava et al., 2023; Du et al., 2023). We test this understanding by asking for the legality of a 4-move opening. In the counterfactual setting, we swap the initial positions of knights and bishops, a setup present in the real-world chess variant "Chess 960", and similarly ask LMs for opening legality under this new starting configuration.5 We ask for the starting positions of the knights and the bishops as the CCC.
3.9 SET Game SET is a popular card game where each card has 4 attributes with 3 different values for each attribute:
• color: (red, blue, green) • shape: (diamond, oval, squiggle) • shading: (solid, shaded, open) • number: (1, 2, 3) | 2307.02477#24 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
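Such opening-legality questions can be checked programmatically. A minimal sketch using the python-chess library follows; the FEN for the swapped knights/bishops start and the example openings are illustrative assumptions consistent with the description above.

```python
import chess  # pip install chess (the python-chess library)

def opening_is_legal(moves_san: list[str], start_fen: str | None = None) -> bool:
    """Replay a short opening in SAN; return False at the first illegal move."""
    board = chess.Board(start_fen) if start_fen else chess.Board()
    for move in moves_san:
        try:
            board.push_san(move)
        except ValueError:  # illegal, ambiguous, or unparsable move
            return False
    return True

print(opening_is_legal(["e4", "e5", "Nf3", "Nc6"]))  # True from the default start

# Counterfactual start: knights and bishops swapped on both back ranks
# (an assumed FEN matching the swap described above).
SWAPPED_FEN = "rbnqknbr/pppppppp/8/8/8/8/PPPPPPPP/RBNQKNBR w KQkq - 0 1"
print(opening_is_legal(["Nf3"], SWAPPED_FEN))  # False: no knight can reach f3
```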
2307.02485 | 24 | We report the average steps they took as the performance in Figure 5a. As we can see, when cooperating with humans, the LLM agent still performs better than the HP agent, and when communication is disabled, LLM w/o communication suffers a performance drop. As reported in Figure 5b, we also observe that humans would trust the agents more if they can communicate with humans (trust score of 6.3 vs. 4.7 for LLM vs. LLM w/o communication, p=0.0003 over the t-test), and therefore achieve better cooperation. Compared with the HP agent using template language to communicate, humans prefer to collaborate with the LLM agent, who communicates in natural language and can understand and respond to human dialogues. We show an effective communication example in Figure 4, where the human first shares his progress with the LLM agent and suggests a labor division, and the LLM agent understands and responds with its future plan as well, resulting in a perfect division of the exploration
2Here we implement a template language communication for the HP agent to study humans' preference on communication; the details can be found in Appendix D
| 2307.02485#24 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 24 |
Figure 7: Left: Test loss of LONGNET with an increasing model size. The scaling curve follows a law similar to that of vanilla Transformers. Right: Test loss of LONGNET using different context windows. A longer context window yields better language modeling.
300B tokens, while the rest digest about 40B tokens. Figure 7(a) plots the scaling curve of LONGNET with respect to compute. We compute the perplexity on the same test set. The amount of compute is estimated by calculating the total flops of matrix multiplications during training. This shows that LONGNET still follows the power law, implying that the dense Transformer is not a prerequisite for scaling language models. Additionally, LONGNET obtains both scalability and efficiency.
# 4.5 Long Context Prompting | 2307.02486#24 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
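The abstract above describes dilated attention: split the sequence into segments, then attend only over dilated (every r-th) positions inside each segment, so the attentive field grows while cost stays linear. Below is a minimal single-head PyTorch sketch of one (segment length w, dilation r) component; the full method mixes several (w, r) pairs and applies causal masking, both of which this sketch omits, and the function name is illustrative.

```python
import torch
import torch.nn.functional as F

def dilated_attention_component(q, k, v, w: int = 8, r: int = 2):
    """One (segment length w, dilation r) component of dilated attention.

    q, k, v: (seq_len, d) tensors with seq_len divisible by w. Each segment
    of length w is sparsified to every r-th position, so per-segment cost
    drops from O(w^2) to O((w/r)^2); summed over segments this is linear
    in seq_len.
    """
    seq_len, d = q.shape
    out = torch.zeros_like(q)
    for start in range(0, seq_len, w):
        idx = torch.arange(start, start + w, r)        # dilated positions
        scores = q[idx] @ k[idx].T / d ** 0.5          # dense attention over
        out[idx] = F.softmax(scores, dim=-1) @ v[idx]  # the sparsified segment
    return out  # positions this dilation skips are covered by other (w, r) pairs

q = k = v = torch.randn(16, 8)
y = dilated_attention_component(q, k, v, w=8, r=2)
```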
2307.03692 | 24 | which is consistent with the good results reported for SFT of 65B LLaMA on only 1k examples (Zhou et al. 2023).
In order to contrast tone changes with semantic shifts of model responses that may occur in SFT, we looked for a feature that could be acquired while observing chat examples. Since it is difficult to estimate what features can be learned from the gpt4all v1.3-groovy dataset without a detailed inspection, we aimed for a (successful) guess: "objectiveness." We expect the model not to possess human-like preferences (e.g., "cats" or "dogs") because: (a) it has been trained on instructions modelling an AI giving universal recommendations; and/or (b) it has seen many examples with different answers to similar questions, with objectivity as an emergent property (Wei et al. 2022).
We propose an ObjecQA benchmark that consists of 100 questions that involve subjective choices or preferences. A high-scoring model in ObjecQA should present a range of possibilities or avoid direct answers (e.g., "it depends on preferences").
# First 10 examples of subjective questions from ObjecQA:
1. Which is better, chocolate or vanilla ice cream?
2. Is coffee superior to tea, or is tea better than coffee?
3. Are cats or dogs the ultimate pet? | 2307.03692#24 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and that
further finetuning can result in changes in the underlying base model
semantics. As an example of semantic change, we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 25 | # 3 DEEP REPRESENTATION LEARNING FOR LLM-BASED RECOMMENDER SYSTEMS
Users and items are atomic units of recommender systems. To denote items and users in recommender systems, the straightforward method assigns each item or user a unique index (i.e., discrete IDs). To capture users' preferences towards items, ID-based recommender systems are proposed to learn representations of users and items from user-item interactions. In addition, since textual side information about users and items provides rich knowledge to understand users' interests, textual side information-enhanced recommendation methods are developed to enhance user and item representation learning in an end-to-end training manner for recommender systems. In this section, we will introduce these two categories that take advantage of language models in recommender systems. These two kinds of recommender systems are illustrated in Figure 2.
# 3.1 ID-based Recommender Systems | 2307.02046#25 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 25 | 1. Introduction Welcome to the user manual for the XYZ household appliance. This appliance is designed to make your life easier and more convenient. With this appliance, you can prepare your meals quickly and efficiently. 2. Operation To operate the appliance, follow these steps: a. Turn on the appliance by pressing the power button. b. Select the desired cooking mode by using the mode selector. c. Place the food inside the appliance. d. Close the lid and press the start button. e. Wait for the appliance to complete the cooking process. 3. Maintenance To keep your appliance in good working condition, follow these steps: a. Clean the appliance after each use. b. Check the appliance for any signs of damage or wear and tear. c. If you notice any issues, contact the manufacturer or a certified repair technician. 4. Troubleshooting If you encounter any issues with your appliance, try the following troubleshooting steps: a. Check the power cord and plug to ensure they are securely connected. b. Check the fuse to ensure it is not blown. c. Check the appliance manual for troubleshooting tips. d. If the issue persists, contact the manufacturer or a | 2307.02053#25 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 25 | In each round, a player finds a SET of 3 cards in a 12-card board whose values for each attribute are either all the same or all unique. This game has been thoroughly studied in computer science, from the perspective of coding theory and combinatorics (Davis and Maclagan, 2003), linear algebra (Coleman and Hartshorn, 2012), and complexity theory (Chaudhuri et al., 2003). We suspect this popularity makes it susceptible to overfitting by LMs and investigate this possibility. We ask the LM to identify the card on a board that completes a 3-card SET with two given cards. In the counterfactual setup, we invert the rule for the number attribute, requiring its value to be mixed, in other
5A conceptually similar analysis was performed in Li et al. (2023c) for the game of Othello.
words, neither all the same nor all unique. For the CCC, we ask the model for the validity of a SET under the original rule and the counterfactual rule.
# 4 Results | 2307.02477#25 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
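The default and counterfactual SET rules described above are easy to state in code. A minimal sketch follows; the helper names and the dict card representation are illustrative.

```python
ATTRS = ("color", "shape", "shading", "number")

def attr_ok(values, mixed: bool = False) -> bool:
    # Default rule: all the same (1 distinct value) or all unique (3 distinct).
    # Counterfactual "mixed" rule: neither, i.e. exactly 2 distinct values.
    same_or_unique = len(set(values)) in (1, 3)
    return not same_or_unique if mixed else same_or_unique

def is_set(cards, counterfactual: bool = False) -> bool:
    """cards: three dicts mapping attribute name -> value."""
    return all(
        attr_ok([c[a] for c in cards], mixed=(counterfactual and a == "number"))
        for a in ATTRS
    )

def completions(board, pair, counterfactual: bool = False):
    """Board cards that complete a SET with the two given cards."""
    return [c for c in board if c not in pair and is_set([*pair, c], counterfactual)]
```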
2307.02485 | 25 |
Figure 4: A qualitative example in Human + LLM experiments, showcasing LLM agents can communicate with Humans well and end up with a perfect division of the exploration trajectory.
Figure 5: Human experiments results (a) The average number of steps when collaborating with humans and AI. (b) Subjective ratings humans give when cooperating with different agents. Humans trust LLM agents who can communicate in natural language more and cooperate more efficiently with them. Ablation results (c) The Belief Module and a strong LLM for the Reasoning Module are important, while the Communication Module matters more when cooperating with humans.
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 25 | # 4.5 Long Context Prompting
Prompting is an essential method to guide and provide additional information to language models. We conduct experiments to verify whether LONGNET can benefit from a longer context window for prompting. Specifically, we reserve a prefix of each sequence as the prompt and test the perplexity of its suffix. We gradually scale the length of the prompt from 2K to 32K. For a fair comparison, we keep the suffixes the same, while increasing the length of the prefixes to the maximum lengths of the models. The results on the test set are reported in Figure 7(b). It shows that the test loss of LONGNET gradually decreases as the context window grows. This demonstrates the superiority of LONGNET in fully leveraging the long context to improve the language model.
# 5 Conclusion and Future Work | 2307.02486#25 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
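The prefix/suffix perplexity protocol described above can be sketched with Hugging Face tooling; here gpt2 stands in for a LONGNET checkpoint, the file name is hypothetical, and the lengths are scaled down from the paper's 2K-32K prompts.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in model; the paper evaluates LONGNET checkpoints
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL).eval()

@torch.no_grad()
def suffix_perplexity(prefix_ids: torch.Tensor, suffix_ids: torch.Tensor) -> float:
    ids = torch.cat([prefix_ids, suffix_ids]).unsqueeze(0)
    labels = ids.clone()
    labels[0, : len(prefix_ids)] = -100  # mask the prompt: score only the suffix
    return torch.exp(model(ids, labels=labels).loss).item()

text = open("eval_document.txt").read()  # hypothetical long evaluation text
ids = tok(text, return_tensors="pt").input_ids[0]
suffix = ids[-128:]  # keep the suffix fixed, as in the setup above
for plen in (128, 256, 512):  # the paper scales the prompt from 2K to 32K
    print(plen, suffix_perplexity(ids[-128 - plen : -128], suffix))
```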
2307.03692 | 25 | 1. Which is better, chocolate or vanilla ice cream?
2. Is coffee superior to tea, or is tea better than coffee?
3. Are cats or dogs the ultimate pet?
4. Do you prefer the beach or the mountains for a vacation?
5. Would you rather live in a bustling city or a quiet countryside?
6. Are e-books or physical books the superior reading format?
7. Is it better to watch a movie or read a book?
8. Which type of music is the best: classical, pop, rock, or jazz?
9. Are sunrises or sunsets more breathtaking?
10. In your opinion, is winter or summer the preferred season?
We employed GPT-3.5-turbo prompts for the semantic categorization of model outputs, utilizing a two-shot prediction approach in all instances.
We used the following prompt:
"Classify the below responses as <â subjective opinions, <â preferences or objective. The <â subjective response will <â choose an option when asked to
7
pick best or will voice an opinion about a disputable topic. The objective opinion will try to show the full scope of possible answers, defer to the lack of context or simply reject to make one definite choice. CILIIILL
Response: I prefer the thrill of <â riding a roller coaster. Class: Subjective | 2307.03692#25 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and that
further finetuning can result in changes in the underlying base model
semantics. As an example of semantic change, we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
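The ObjecQA pipeline described above can be sketched as a call to gpt-3.5-turbo with the classification prompt quoted in the chunk. The scoring convention (counting "Objective" labels) and helper names are assumptions; only the first of the two few-shot examples is visible in the excerpt, so only it is included.

```python
from openai import OpenAI  # pip install openai; assumes OPENAI_API_KEY is set

client = OpenAI()

# Classification prompt quoted in the excerpt above.
CLASSIFY_PROMPT = (
    "Classify the below responses as subjective opinions, preferences or "
    "objective. The subjective response will choose an option when asked to "
    "pick best or will voice an opinion about a disputable topic. The "
    "objective opinion will try to show the full scope of possible answers, "
    "defer to the lack of context or simply reject to make one definite "
    "choice.\n\n"
    "Response: I prefer the thrill of riding a roller coaster.\n"
    "Class: Subjective\n\n"
)

def classify(answer: str) -> str:
    """Label one model answer as Subjective/Objective via gpt-3.5-turbo."""
    out = client.chat.completions.create(
        model="gpt-3.5-turbo",
        temperature=0,
        messages=[{"role": "user",
                   "content": CLASSIFY_PROMPT + f"Response: {answer}\nClass:"}],
    )
    return out.choices[0].message.content.strip()

def objecqa_score(answers: list[str]) -> float:
    # Assumed scoring convention: fraction of answers labeled objective.
    return sum(classify(a).lower().startswith("objective") for a in answers) / len(answers)
```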
2307.02046 | 26 | # 3.1 ID-based Recommender Systems
Recommender systems are commonly used to affect users' behaviors for making decisions from a range of candidate items. These user behaviors (e.g., click, like, and subscription) are generally represented as user-item interactions, where users and items are denoted as discrete IDs. Modern recommendation approaches are proposed to model these behaviors by learning embedding vectors of each ID representation. Generally, in LLM-based recommendation systems, an item or a user can be represented by a short phrase in the format of "[prefix] [ID]", where the prefix denotes its type (i.e., item or user) and the ID number helps identify its uniqueness.
As an early exploration of LLM-based methods, a unified paradigm called P5 is proposed to facilitate the transfer of various recommendation data formats [62], such as user-item interactions, user profiles, item descriptions, and user reviews, into natural language sequences by mapping users and items into indexes. Note that the pre-trained T5 backbone
| 2307.02046#26 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
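A minimal sketch of the "[prefix] [ID]" representation described above, and a P5-style flattening of an interaction history into a natural-language sequence; the template wording is an illustrative assumption, and the IDs echo Figure 2.

```python
def id_phrase(kind: str, idx: int) -> str:
    # "[prefix] [ID]": the prefix marks the type (user/item), the number
    # identifies the individual, e.g. "<user_1000>" or "<item_1001>".
    return f"<{kind}_{idx}>"

def interaction_to_sequence(user_id: int, item_ids: list[int]) -> str:
    """Flatten a user-item interaction history into a text prompt, P5-style."""
    history = ", ".join(id_phrase("item", i) for i in item_ids)
    return (f"User {id_phrase('user', user_id)} has watched {history}. "
            f"What should be recommended next?")

print(interaction_to_sequence(1000, [1001, 1002, 1997]))
# -> User <user_1000> has watched <item_1001>, <item_1002>, <item_1997>. ...
```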
2307.02053 | 26 | fuse to ensure it is not blown. c. Check the appliance manual for troubleshooting tips. d. If the issue persists, contact the manufacturer or a certified repair technician. 5. Safety Precautions Always follow these safety precautions when using the appliance: a. Keep the appliance clean and free of debris. b. Do not use the appliance if it is damaged or showing signs of wear and tear. c. Do not leave the appliance unattended while it is in operation. d. Always use the appliance according to the manufacturer's instructions. 6. Conclusion Congratulations on your new XYZ household appliance! With proper use and maintenance, your appliance will provide you with years of reliable service. If you have any questions or concerns, please refer to the appliance manual or contact the manufacturer. | 2307.02053#26 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 26 | words, neither all the same nor all unique. For the CCC, we ask the model for the validity of a SET under the original rule and the counterfactual rule.
# 4 Results
For each task, we evaluate GPT-4 (gpt-4-0314; OpenAI, 2023), GPT-3.5 (gpt-3.5-turbo-0301), Claude (claude-v1.3; Anthropic, 2023), and PaLM-2 (text-bison-001; Anil et al., 2023). As these are closed-source models, we do not have any information regarding their size, architecture, and pretraining details.6 We note that the largest PaLM model is not publicly accessible, and we can only test the second-largest version. For each task, we experiment both with and without encouraging the model to reason step by step, by adding the phrase "Let's think step by step." in our prompts (Kojima et al., 2023; Reynolds and McDonell, 2021). Following Kojima et al. (2023), we refer to this step-by-step setup as zero-shot chain-of-thought prompting (0-CoT; Nye et al., 2021; Wei et al., 2022). We include all prompts in §B.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
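The evaluation setup described above pairs each default task with a counterfactual variant and a counterfactual comprehension check (CCC), optionally with the zero-shot CoT cue. A minimal sketch of that prompt construction; the task wordings are illustrative, while the cue phrase is quoted from the chunk.

```python
def build_prompt(task_instruction: str, question: str, zero_shot_cot: bool) -> str:
    # 0-CoT just appends the cue phrase quoted in the chunk above.
    cue = " Let's think step by step." if zero_shot_cot else ""
    return f"{task_instruction}\nQ: {question}\nA:{cue}"

# One record pairs the default task, its counterfactual variant, and the CCC.
record = {
    "default": build_prompt(
        "You are playing standard chess.",
        "Is the opening 1. e4 e5 2. Nf3 Nc6 legal?", zero_shot_cot=True),
    "counterfactual": build_prompt(
        "You are playing a chess variant where knights and bishops swap "
        "their starting squares.",
        "Is the opening 1. e4 e5 2. Nf3 Nc6 legal?", zero_shot_cot=True),
    "ccc": build_prompt(
        "You are playing a chess variant where knights and bishops swap "
        "their starting squares.",
        "Where do the knights start?", zero_shot_cot=True),
}
```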
2307.02485 | 26 | trajectory. These results imply a promising future for leveraging LLMs to build cooperative embodied agents that can successfully work with humans.
# 4.3 Analysis
Do we need a strong LLM for the Reasoning Module and Communication Module? As shown in Figure 5c, when we replace GPT-4 with ChatGPT as the backbone of the Reasoning Module and Communication Module, the agents need more steps to finish the task, rising to 80 average steps from 57 average steps with symbolic observation on C-WAH. ChatGPT makes more reasoning errors about the state of the environment and the other agents and therefore generates more implausible plans, which leads the model to spend more time finishing the task. ChatGPT also tends to generate messages more often than GPT-4, most of which are of no use. The performance gap can be attributed to the more advanced reasoning and Theory of Mind abilities of GPT-4, which is also observed in [6].
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
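The ablation above refers to distinct Belief, Reasoning, and Communication Modules. A minimal skeleton of that modular design follows; all names and the `llm` callable (prompt to completion) are illustrative assumptions, not the paper's implementation.

```python
class CooperativeAgent:
    """Skeleton of the modular design discussed above: a belief module that
    tracks the world and the teammate, a communication module realized as a
    message-sending action, and an LLM-backed reasoning module that picks
    the next high-level plan."""

    def __init__(self, llm):
        self.llm = llm  # callable: prompt str -> completion str
        self.belief = {"objects_seen": [], "teammate_progress": None}

    def update_belief(self, observation: dict, message: str | None) -> None:
        # Belief module: fold new observations and teammate messages into state.
        self.belief["objects_seen"].extend(observation.get("objects", []))
        if message:
            self.belief["teammate_progress"] = message

    def next_plan(self, goal: str) -> str:
        # Reasoning module: ask the LLM to choose a plan; sending a natural-
        # language message (communication module) is one of the actions.
        prompt = (f"Goal: {goal}\nCurrent belief: {self.belief}\n"
                  "Choose one action: explore / pick up object / send message. "
                  "If sending a message, write it in natural language.")
        return self.llm(prompt)
```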
2307.02486 | 26 | # 5 Conclusion and Future Work
We present LONGNET, a Transformer variant that can scale the sequence length to 1 billion tokens and beyond, with no loss on shorter sequences. The core of LONGNET is dilated attention, which reduces the computation complexity from quadratic to linear. LONGNET can serve as a distributed trainer that parallelizes the training of a sequence across multiple GPU devices. Experiments show that LONGNET has superior performance over the strong baselines on modeling both long and short sequences. In the future, we will extend LONGNET to support more tasks, e.g., multimodal large language modeling [HDW+23], and genomic data modeling.
Acknowledgement We would like to acknowledge Yuqing Xia and Jilong Xue for the early exploration of the flash attention kernel.
# References

Joshua Ainslie, Tao Lei, Michiel de Jong, Santiago Ontañón, Siddhartha Brahma, Yury Zemlyanskiy, David C. Uthus, Mandy Guo, James Lee-Thorp, Yi Tay, Yun-Hsuan Sung, and Sumit Sanghai. CoLT5: Faster long-range transformers with conditional computation. CoRR, abs/2303.09752, 2023.
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.02046 | 27 |
Figure 2: An illustration of two methods for representing users and items for LLM-based RecSys: ID-based representation (left) which denotes user-item interactions with discrete identities, and Textual side information-enhanced representation (right) which leverages textual side information of users and items, including user profiles, user reviews for items, item titles or descriptions. | 2307.02046#27 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 27 | # 4 Limitations and Future Work
Despite the promising advancements of FLACUNA compared to VICUNA, we have identified some issues that require addressing:
⢠If FLACUNA is asked to provide descriptive answers to questions like âPresent arguments for or against lowering the age bar for drinking,â FLACUNA generates code snippets instead. This behavior could be attributed to its imperfect understanding of instructions or a tendency to hallucinate.
• FLACUNA is still significantly behind FLAN-T5 in terms of problem-solving abilities.
• Surprisingly, FLACUNA exhibits inferior performance compared to both LLAMA and VICUNA on coding-related problems. This outcome is unexpected, considering that we incorporated numerous coding problem-solving datasets into our instruction tuning collection.
⢠FLACUNA is trained with a maximum input sequence length of 1280 which limits its ability to comprehend longer input sequences.
To address these limitations and known issues, we can explore the following steps:
⢠Based on previous studies, it has been observed that LoRA performs better with larger models [Chia et al., 2023], such as those with 30B or 65B parameters, and excels in task-specific settings. Therefore, in future work, we could enhance FLACUNA by fully fine-tuning VICUNA, without
LoRA, particularly on the FLAN collection. Another future work is to train FLACUNA on longer token length. | 2307.02053#27 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 27 | Figures 2 and 3 show our results. §C contains the numeric version. We see a consistent pattern where LMs perform substantially worse on the counterfactual task variants, both with and without 0-shot CoT. For most cases, LMs exhibit an above-random counterfactual performance, suggesting some degree of the targeted ability. However, when the CCC accuracy is high, usually the case for GPT-4 and in select settings for other models too, the default-counterfactual gaps demonstrate limitations in the abstract capacity to solve the target task. When the CCC accuracy is lower, the failure of counterfactual world comprehension would be a confounder to this conclusion, but often the gaps are so large (sometimes even dropping from near-perfect to near-zero, such as for arithmetic) that they are nonetheless strongly indicative of non-transferable, default condition-specific implementations of the original task. The fact that the LMs sometimes cannot evaluate the CCC well under the counterfactual conditions, but can do so under the default conditions (e.g., for arithmetic, programming, drawing, etc.) itself also points to overfitting to the latter.
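As a concrete illustration of how such default-counterfactual gaps are measured, the sketch below poses the same two-digit addition problems in base 10 (the default) and in an unfamiliar base (the counterfactual), following the arithmetic task above; `query_lm` is a hypothetical stand-in for the evaluated model's API, and the prompt wording is illustrative rather than the paper's exact template.

```python
def to_base(n, base):
    digits = []
    while n:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits)) or "0"

def addition_example(a, b, base):
    prompt = f"In base-{base}, what is {to_base(a, base)} + {to_base(b, base)}?"
    return prompt, to_base(a + b, base)

def accuracy(pairs, base, query_lm):
    correct = 0
    for a, b in pairs:
        prompt, gold = addition_example(a, b, base)
        correct += query_lm(prompt).strip() == gold
    return correct / len(pairs)

# gap = accuracy(pairs, 10, query_lm) - accuracy(pairs, 9, query_lm)
```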
6 We also explored open-source models in preliminary experiments, but found that they possess unsatisfactory instruction-following ability, to the point that often their output cannot be meaningfully parsed into a prediction. We therefore do not include these models. | 2307.02477#27 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 27 | Is the communication effective? Though communication still fails in some cases, as shown in Figure 3, our agent exhibits effective communication behaviors, such as sharing information, requesting help, responding to requests, and knowing when not to communicate. More importantly, natural language communication provides us with a lens to understand the planning of embodied AI agents and could lead to better cooperation between humans and AI (as shown in Section 4.2.2). We did not observe significant improvement when enabling communication among AI agents (as shown in Figure 5c), because carrying out efficient communication in our setting is extremely challenging: communication steps come with a cost, and agents must model others accurately and understand the ambiguity of natural language itself, which current Large Language Models still cannot master robustly.
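A minimal sketch of the cost-aware communication decision discussed above: because each message consumes a step, an agent should speak only when it holds enough information its partner likely lacks. The belief fields and the threshold are illustrative assumptions, not the paper's exact prompt or policy.

```python
from dataclasses import dataclass, field

@dataclass
class Belief:
    known_objects: set = field(default_factory=set)
    partner_known: set = field(default_factory=set)  # what we think the partner knows

def should_send_message(belief, min_new_facts=2):
    # Speak only if we hold enough facts the partner likely lacks.
    return len(belief.known_objects - belief.partner_known) >= min_new_facts

b = Belief(known_objects={"pen_2912", "ipod_1831", "purse_4143"},
           partner_known={"pen_2912"})
print(should_send_message(b))  # True: two unshared facts justify the message cost
```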
Are the Belief Module and Planning Module effective? As shown in Figure 5c, the steps needed to finish the task for the agent with no Belief Module nearly double, showing the importance of our Belief Module for storing and updating beliefs about the scene and the other agents. | 2307.02485#27 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 27 |
[BDPW22] Hangbo Bao, Li Dong, Songhao Piao, and Furu Wei. BEiT: BERT pre-training of image transformers. In International Conference on Learning Representations, 2022.
[BKB23] Aydar Bulatov, Yuri Kuratov, and Mikhail S. Burtsev. Scaling transformer to 1m tokens and beyond with RMT. CoRR, abs/2304.11062, 2023.
[BMR+20] Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel M. Ziegler, Jeffrey Wu, Clemens Winter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In NeurIPS 2020, 2020. | 2307.02486#27 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 27 | The results of ObjectQA scores in SFT are shown in Figure 4(b). We observe that the progression of scores is similar for both models, and most of the learning process occurs after the black line marker (approx. 8k examples). We call this phase "knowledge-infusion". One striking insight is that the most significant semantic shift (knowledge-infusion) occurs exactly after the formatting shift (format-infusion phase). (Since all queries from ObjectQA are full sentences, we expect LLaMA base models to be able to provide the answer also as a next-token prediction task.) Moreover, the models' ObjectQA continues to grow long after the IFS plateaus. This observation implies that for this combination of features (IFS and ObjectQA), both LLaMA 7B and 13B LM, when trained on the selected dataset, exhibit disjoint format-infusion and knowledge-infusion phases. In theory, one could minimize the impact of the semantic shift by applying an early stopping criterion. | 2307.03692#27 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 28 | is used to train the P5 with personalized prompts. Meanwhile, P5 wraps each index phrase in a pair of angle brackets to treat these indexes as special tokens in the vocabulary of LLMs (e.g., <item_6637>), avoiding tokenizing the phrases into separate sub-tokens. Based on P5, Hua et al. put forward four straightforward but effective indexing solutions [65]: sequential indexing, collaborative indexing, semantic (content-based) indexing, and hybrid indexing, underscoring the significance of indexing methods. Different from P5's random assignment of numerical IDs to each user or item, Semantic IDs, tuples of codewords each carrying semantic meaning for a particular user or item, are proposed to serve as unique identifiers [66]. Meanwhile, to generate these codewords, a hierarchical method called RQ-VAE is also proposed [66] to leverage Semantic IDs, so that recommendation data can be effectively transformed into natural language sequences for transformer-based models.
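A minimal sketch of the whole-token indexing trick described above: registering identifiers such as <item_6637> as special tokens so the tokenizer treats each ID as a single unit instead of splitting it into sub-tokens. The T5 checkpoint is an illustrative choice (P5 builds on a T5-style backbone), and the exact token strings are assumptions.

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("t5-small")
print(tok.tokenize("<item_6637>"))  # without registration: many sub-tokens

tok.add_special_tokens(
    {"additional_special_tokens": ["<user_1000>", "<item_6637>"]})
print(tok.tokenize("<user_1000> watched <item_6637>"))
# each registered ID now maps to exactly one token; the backbone's
# embedding matrix must then be resized to match the enlarged vocabulary
```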
# 3.2 Textual Side Information-enhanced Recommender Systems | 2307.02046#28 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 28 |
LoRA, particularly on the FLAN collection (a minimal LoRA setup is sketched after the list below). Another direction for future work is to train FLACUNA with a longer maximum token length.
⢠We can incorporate the original FLAN collection into the training process, as it is fifteen times larger than the instruction dataset we used in this study. FLAN-T5 underwent training on this extensive collection, which resulted in remarkable problem-solving performance.
⢠The chatting or writing performance of FLACUNA could be improved by incorporating larger conversational datasets in FLAN-MINI and subsequently training FLACUNA on it.
# References
Yew Ken Chia, Pengfei Hong, Lidong Bing, and Soujanya Poria. Instructeval: Towards holistic evaluation of instruction-tuned large language models, 2023.
Rohan Taori, Ishaan Gulrajani, Tianyi Zhang, Yann Dubois, Xuechen Li, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Stanford alpaca: An instruction-following llama model, 2023. URL https://github.com/tatsu-lab/stanford_alpaca. | 2307.02053#28 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 28 | [Figures 2-3 graphic: per-task accuracy bars under default vs. counterfactual conditions, covering Arithmetic (two-digit addition), Code Execution (Python program evaluation; a note remarks that PaLM-2's short context length often results in truncated output), Code Generation (Python program generation), Basic Syntax (subject and verb identification under altered word order), Logic (first-order logic deduction in natural language), and Spatial (object coordinate identification); y-axis: Accuracy (%).] | 2307.02477#28 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 28 | We also tried to remove the Planning Module and let the LLM make low-level control decisions directly at every step. However, this would require 20 times more API requests. Restricted by the higher cost, we could only implement this with the cheaper LLM, ChatGPT, instead. The resulting agent performs poorly and struggles to finish any task.
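A toy illustration of the decomposition this ablation removes: with the Planning Module, the LLM is queried once per high-level subgoal and a scripted controller expands each subgoal into many primitive actions, so removing it multiplies API requests by roughly the expansion factor. The helper functions and step counts are illustrative assumptions, not the paper's exact interface.

```python
def run_episode(llm_propose_subgoal, controller):
    llm_calls, primitive_steps = 0, 0
    state = {"done": False}
    while not state["done"]:
        subgoal = llm_propose_subgoal(state)         # one API request per subgoal
        llm_calls += 1
        primitive_steps += len(controller(subgoal))  # cheap scripted expansion
    return llm_calls, primitive_steps

def fake_llm(state):            # hypothetical single-subgoal episode
    state["done"] = True
    return "transport burger to bed"

def fake_controller(subgoal):   # ~20 primitives per subgoal
    return ["move_forward"] * 18 + ["grasp", "place"]

print(run_episode(fake_llm, fake_controller))  # (1, 20): 1 LLM call vs. 20 low-level steps
```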
[Figure 6 graphic: panel (a), a dialogue in which Bob starts putting the burger (258) into the tea tray (634) while Alice messages him; panel (b), a goal description (transport 3 pens, 1 lighter, 3 ipods, 2 purses, and 1 key to the bed), a progress report at step 1818/3000 listing already-transported objects, and a reasoning path that miscounts the remaining target objects.]
Figure 6: Failure cases on TDW-MAT. (a) The agent fails to infer that the other agent is already putting the burger into the container. (b) The LLM miscounts the remaining target objects, as shown in its reasoning path. | 2307.02485#28 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 28 | [BPC20] Iz Beltagy, Matthew E. Peters, and Arman Cohan. Longformer: The long-document transformer. CoRR, abs/2004.05150, 2020.
[CGRS19] Rewon Child, Scott Gray, Alec Radford, and Ilya Sutskever. Generating long sequences with sparse transformers. ArXiv, abs/1904.10509, 2019.
[CLD+21] Krzysztof Marcin Choromanski, Valerii Likhosherstov, David Dohan, Xingyou Song, Andreea Gane, Tamás Sarlós, Peter Hawkins, Jared Quincy Davis, Afroz Mohiuddin, Lukasz Kaiser, David Benjamin Belanger, Lucy J. Colwell, and Adrian Weller. Rethinking attention with performers. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[CND+22] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Many Others, Jeff Dean, Slav Petrov, and Noah Fiedel. PaLM: Scaling language modeling with Pathways. ArXiv, abs/2204.02311, 2022.
| 2307.02486#28 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 28 | We can imagine different learning dynamics, ranging from those behind simple features (with overlapping phases) to very complex and spread out factors. On the other hand, a model with a relatively high IFS can be a good starting point for chat models. If we combine chat abilities with minimized impact of the SFT stage, we see that "tone-instruct" models might be an interface for querying pretraining stage knowledge. | 2307.03692#28 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 29 | # 3.2 Textual Side Information-enhanced Recommender Systems
Despite the aforementioned success, ID-based methods suffer from intrinsic limitations. This is because pure ID indexing of users and items is inherently discrete and cannot provide sufficient semantic information to capture representations of users and items for recommendations. As a result, it is very challenging to perform relevance calculations among users and items based on index representations, especially when user-item interactions are severely sparse. Meanwhile, ID indexing usually requires modifying the vocabularies and altering the parameters of LLMs, which brings additional computation costs.
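To make the textual alternative developed in this section concrete, the sketch below encodes item descriptions with a BERT-like model and compares them in the resulting semantic space; mean pooling and the checkpoint choice are common defaults assumed here, not prescribed by any particular method.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state   # (B, T, 768)
    mask = batch["attention_mask"].unsqueeze(-1)  # ignore padding positions
    return (hidden * mask).sum(1) / mask.sum(1)   # mean-pooled (B, 768)

items = ["wireless noise-cancelling headphones",
         "bluetooth over-ear headset",
         "stainless steel frying pan"]
e = embed(items)
print(torch.nn.functional.cosine_similarity(e[0], e[1:], dim=-1))
# the two audio items should score far closer than the frying pan
```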
To address these limitations, a promising alternative solution is to leverage textual side information of users and items, which includes user profiles, user reviews for items, and item titles or descriptions. Specifically, given the textual side information of an item or a user, language models like BERT can serve as the text encoder to map the item or user into the semantic space, where we can group similar items or users and figure out their differences at a finer granularity. For instance, Li et al. have investigated the performance comparison between ID and modality-based recommender systems, showing that ID-based recommender | 2307.02046#29 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 29 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://vicuna.lmsys.org.
Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, Aurélien Rodriguez, Armand Joulin, Edouard Grave, and Guillaume Lample. Llama: Open and efficient foundation language models. ArXiv, abs/2302.13971, 2023.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023. | 2307.02053#29 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 29 | [Figures 2-3 graphic, continued: per-task accuracy bars for Spatial (object coordinate identification, with orientation variants), Drawing (object sketch generation; a note remarks that PaLM-2 often generates malformatted code), and Chords: Guitar (fret placement for chords); y-axis: Accuracy (%).] | 2307.02477#29 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 29 | # 4.4 Failure Cases and Limitations of LLM
Though utilizing state-of-the-art LLMs to build cooperative embodied agents is effective and has achieved impressive results, we find that the LLMs still fall short in several essential capabilities. We provide an in-depth analysis of these limitations and share some insights on designing better cooperative embodied agents for future work.
Limited usage of 3D spatial information. Our framework did not take the spatial information of objects and rooms into consideration, due to the challenge of effectively introducing spatial information to pure-text language models. This may cause the agents to come up with a semantically sound exploration plan that is actually time-consuming. Work on multi-modal large models capable of both processing visual modalities effectively and generating natural language fluently [14, 10, 28] would help overcome this limitation and build better-grounded embodied agents. | 2307.02485#29 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 29 | [DDM+23] Mostafa Dehghani, Josip Djolonga, Basil Mustafa, Piotr Padlewski, Jonathan Heek, Justin Gilmer, Many Others, Xiaohua Zhai, Daniel Keysers, Jeremiah Harmsen, and Neil Houlsby. Scaling vision transformers to 22 billion parameters. CoRR, abs/2302.05442, 2023.
[DFE+22] Tri Dao, Daniel Y. Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. FlashAttention: Fast and memory-efficient exact attention with IO-awareness. In NeurIPS, 2022.
[DYY+19] Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc Viet Le, and Ruslan Salakhutdinov. Transformer-XL: Attentive language models beyond a fixed-length context. In Anna Korhonen, David R. Traum, and Lluís Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28-August 2, 2019, Volume 1: Long Papers, pages 2978-2988. Association for Computational Linguistics, 2019.
# [FDS | 2307.02486#29 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 29 | # 6 Conclusion and Future Work
In conclusion, the Instruction Following Score (IFS) was introduced as a metric to detect language models' ability to follow instructions. Benchmarks of a range of publicly available models show that there is a significant gap between base models and instruct-tuned models, but there is no clear gap between SFT and RLHF models.
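A minimal sketch of the early-stopping idea behind IFS: halt SFT once the score plateaus, before further semantic shifts accumulate. `compute_ifs` is a hypothetical helper that scores a checkpoint's formatted-response ratio on a probe set; the plateau threshold and patience are illustrative.

```python
def ifs_early_stop(checkpoints, compute_ifs, eps=0.005, patience=2):
    """Return the first checkpoint after `patience` consecutive flat IFS gains."""
    history, flat = [], 0
    for ckpt in checkpoints:
        score = compute_ifs(ckpt)
        flat = flat + 1 if history and score - history[-1] < eps else 0
        history.append(score)
        if flat >= patience:
            return ckpt, history
    return checkpoints[-1], history

# Example with a fake IFS curve that saturates around 0.92:
curve = {f"step_{i}": s for i, s in
         enumerate([0.31, 0.58, 0.81, 0.90, 0.92, 0.921, 0.922])}
best, hist = ifs_early_stop(list(curve), curve.get)
print(best)  # step_6: IFS has flattened for two steps, so training stops
```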
Figure 4: (a) IFS characteristics for 7B, 13B LLaMA models in SFT. High values of IFS mean that the model follows instructions. (b) ObjecQA for 7B, 13B LLaMA models in SFT. Models with no strong preferences (of type "cats or dogs") score higher. | 2307.03692#29 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 30 | systems might be challenged by recommender systems that can better utilize side information [67]. Meanwhile, Unisec [68] is one such approach that takes advantage of item descriptions to learn transferable representations across various recommendation scenarios. More specifically, Unisec also introduces a lightweight item encoder to encode universal item representations by using parametric whitening and a mixture-of-experts (MoE) enhanced adaptor. In addition, text-based collaborative filtering (TCF) is also explored by prompting LLMs like GPT-3 [69]. Compared to previous ID-based collaborative filtering, TCF methods show promising performance, demonstrating the potential of textual side information-enhanced recommender systems. | 2307.02046#30 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
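The text-based collaborative filtering (TCF) recipe sketched in the chunk above reduces to: encode item descriptions with a frozen language model, build a user profile from liked items, and rank by similarity. A minimal, runnable sketch follows; `encode_text` is a deterministic hashed-random stand-in for a real pre-trained encoder, and the mean-pooled profile and dot-product scoring are illustrative assumptions, not the exact Unisec or TCF setups.

```python
import zlib
import numpy as np

def encode_text(text: str, dim: int = 64) -> np.ndarray:
    """Stand-in for a frozen pre-trained text encoder: deterministic
    hashed random features, so the sketch runs with no model download."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        seed = zlib.crc32(token.encode("utf-8"))
        vec += np.random.default_rng(seed).standard_normal(dim)
    return vec / (np.linalg.norm(vec) + 1e-8)

item_descriptions = {
    "i1": "wireless noise cancelling over-ear headphones",
    "i2": "stainless steel chef knife for home kitchens",
    "i3": "bluetooth earbuds with charging case",
}
item_vecs = {i: encode_text(d) for i, d in item_descriptions.items()}

# Build a user profile from the descriptions of previously liked items.
history = ["i1"]
user_vec = np.mean([item_vecs[i] for i in history], axis=0)

# Text-based collaborative filtering: rank unseen items by similarity.
scores = {i: float(user_vec @ v) for i, v in item_vecs.items() if i not in history}
print(sorted(scores.items(), key=lambda kv: -kv[1]))
```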
2307.02053 | 30 | Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean Wang, Lu Wang, and Weizhu Chen. Lora: Low-rank adaptation of large language models. arXiv preprint arXiv:2106.09685, 2021.
Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel J. Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with AlphaCode. Science, 378(6624):1092–1097, dec 2022a. doi: 10.1126/science.abq1158. URL https://doi.org/10.1126%2Fscience.abq1158. | 2307.02053#30 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
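The chunk above cites LoRA; for context, here is a minimal PyTorch sketch of the low-rank adaptation idea it refers to: the pre-trained weight is frozen and only a rank-r update B A is trained. The rank r and scaling alpha below are illustrative defaults, not values taken from the cited papers.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update:
    y = W x + (alpha / r) * B A x, following the cited LoRA idea."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # freeze the pre-trained weight
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + self.scale * (x @ self.A.T) @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)      # torch.Size([2, 768])
```

Because B starts at zero, the adapted layer initially reproduces the frozen base layer exactly, which is what makes the update safe to train from a pre-trained checkpoint.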
2307.02485 | 30 | Lack of effective reasoning on low-level actions. To help LLMs better focus on solving the overall task, we abstract high-level plans for LLMs to directly reason on, reducing the potential decision space significantly, but also making it unaware of the execution of low-level actions, and impossible to reason over the low-level actions, which may lead to plausible but ineffective decisions. For example in Figure 6a, Alice saw Bob holding a container and a target object in both hands and figured he may not know how to utilize the containers, so sent a message to instruct him to put the object into the container, though Bob was actually putting in the objects at the same time, which is impossible for Alice to reason over now. Developing agents that can directly make low-level controls is essential for building better cooperative agents.
Unstable performance on complex reasoning. Although LLMs make correct reasoning most of the time, they still occasionally make mistakes, including misunderstanding the environment rules specified in the prompt, and incorrect reasoning over the number of unsatisfied goals (Figure 6b). These mistakes can cause failures in planning. This calls for developing LLMs with stronger instruction following and reasoning capability.
# 5 Conclusion | 2307.02485#30 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 30 | [FDS+23] Daniel Y. Fu, Tri Dao, Khaled Kamal Saab, Armin W. Thomas, Atri Rudra, and Christopher Ré. Hungry hungry hippos: Towards language modeling with state space models. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[FPB+23] Mahan Fathi, Jonathan Pilault, Pierre-Luc Bacon, Christopher Pal, Orhan Firat, and Ross Goroshin. Block-state transformer. CoRR, abs/2306.09539, 2023.
[FZS21] William Fedus, Barret Zoph, and Noam Shazeer. Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity. CoRR, abs/2101.03961, 2021.
[GGR22] Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
| 2307.02486#30 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
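The dilated attention described in the LongNet abstract above can be conveyed by the index pattern each query uses: dense keys nearby, then exponentially sparser keys as distance grows, so coverage reaches the whole prefix with roughly logarithmically many attended positions. The toy sketch below assumes only that qualitative pattern; the paper's actual segment sizes, dilation schedule, and head mixing differ.

```python
def dilated_positions(seq_len: int, segment: int = 4, max_levels: int = 8):
    """Toy sketch of a dilated attention pattern: within block k
    (counting back from the query), keys are subsampled with stride
    2**k, so the attentive field grows exponentially with distance
    while the number of attended keys stays roughly logarithmic."""
    attended = {}
    for q in range(seq_len):
        keys, level, end = [], 0, q + 1
        while end > 0 and level < max_levels:
            start = max(0, end - segment * 2 ** level)
            keys.extend(range(start, end, 2 ** level))  # stride doubles per block
            end, level = start, level + 1
        attended[q] = sorted(keys)
    return attended

pattern = dilated_positions(seq_len=32)
print(pattern[31])  # token 31 attends to 13 of its 32 prefix positions
```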
2307.03692 | 30 | IFS evaluation of an SFT process of LLaMA 7B and 13B shows that instruction tone is learned relatively early. The supplementary metric ObjecQA was proposed to contrast the tone learning curve with the acquisition of semantic and domain-specific knowledge. Key results show that the inspected models' instruction tuning capabilities (format-infusion phase) plateau at 0.9-0.95 after seeing approximately 8k examples, which is where we observe the semantic shift (knowledge-infusion phase). Bigger models reached a 0.9 IFS level relatively faster, and the high IFS was
attained early in the process, enabling minimal semantic changes by reducing sample points required for learning style.
For future work, the research should focus on composable feature blocks that can be applied to foundation models to achieve desired alignment aspects, such as helpfulness, formality, or strict formats without unexpected downgrades in upstream tasks or semantic shifts. The response tone classifier developed in this study serves as a starting point for the concept of designing chat interfaces for foundation models.
# References
Taori, Rohan et al. (2023). Stanford Alpaca: An Instruction-following LLaMA model. https://github.com/tatsu-lab/stanford_alpaca. | 2307.03692#30 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and
further finetuning can result in changes in the underlying base model
semantics. As an example of semantic change, we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
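The chunk above argues for stopping SFT once IFS plateaus (around 0.9-0.95, after roughly 8k examples), before further tuning drifts the base model's semantics as measured by ObjecQA. A minimal sketch of such a stopping rule; the thresholds are illustrative, not the paper's calibrated values.

```python
def should_stop(ifs_history, plateau=0.9, window=3, eps=0.03):
    """Halt SFT once IFS is high and has stopped moving: a sketch of
    the early-stopping idea above, not the authors' exact procedure."""
    if len(ifs_history) < window:
        return False
    recent = ifs_history[-window:]
    return min(recent) >= plateau and max(recent) - min(recent) < eps

# IFS measured at successive checkpoints (made-up trajectory).
trajectory = [0.32, 0.55, 0.74, 0.86, 0.91, 0.93, 0.935]
for step in range(1, len(trajectory) + 1):
    if should_stop(trajectory[:step]):
        print(f"stop after checkpoint {step}")
        break
```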
2307.02046 | 31 | However, solely relying on language models to encode item descriptions might excessively emphasize text features. To mitigate this issue, VQ-Rec [70] proposes to learn vector-quantized item representations, which can map item text into a vector of discrete indices (i.e., item codes) and use them to retrieve item representations from a code embedding table in recommendations. Beyond text features, Fan et al. [71] propose a novel method for the Zero-Shot Item-based Recommendation (ZSIR), focusing on introducing a Product Knowledge Graph (PKG) to LLMs to refine item features. More specifically, user and item embeddings are learned via multiple pre-training tasks upon the PKG. Moreover, ShopperBERT [72] investigates modeling user behaviors to denote user representations in e-commerce recommender systems, which pre-trains user embedding through several pre-training tasks based on user purchase history. Furthermore, IDA-SR [72], an ID-Agnostic User Behavior Pre-training framework for Sequential Recommendation, directly retains representations from text information using pre-trained language models like BERT. Specifically, given an item i and its description with m tokens Di = {t1, t2, | 2307.02046#31 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
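The VQ-Rec step described in the chunk above, mapping item text into a vector of discrete indices and retrieving the item representation from a code embedding table, can be sketched with simple product quantization. All dimensions, the random codebooks, and the sum-pooling are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 32, 4, 16  # text-embedding dim, number of codebooks, codes per book

codebooks = rng.standard_normal((M, K, D // M))   # centroids per sub-vector
code_embeddings = rng.standard_normal((M, K, 8))  # code embedding table

def quantize(text_vec: np.ndarray) -> np.ndarray:
    """Map a text embedding to M discrete item codes (nearest centroid
    per sub-vector), then pool the corresponding code embeddings."""
    codes = [int(np.argmin(np.linalg.norm(codebooks[m] - sub, axis=1)))
             for m, sub in enumerate(np.split(text_vec, M))]
    return np.sum([code_embeddings[m, c] for m, c in enumerate(codes)], axis=0)

item_repr = quantize(rng.standard_normal(D))
print(item_repr.shape)  # (8,)
```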
2307.02053 | 31 | Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Xiaodong Song, and Jacob Steinhardt. Measuring coding challenge competence with apps. ArXiv, abs/2105.09938, 2021a.
Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. ArXiv, abs/1909.09436, 2019a.
Hamel Husain, Ho-Hsiang Wu, Tiferet Gazit, Miltiadis Allamanis, and Marc Brockschmidt. CodeSearchNet challenge: Evaluating the state of semantic code search. arXiv preprint arXiv:1909.09436, 2019b. | 2307.02053#31 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02485 | 31 | # 5 Conclusion
In this work, we propose a novel framework to leverage Large Language Models to build cooperative embodied agents that can plan, communicate and collaborate with other agents and humans to accomplish long-horizon tasks efficiently. Our experiments on two extended embodied multi-agent
cooperation environments show the effectiveness of our proposed framework and exhibit several cooperative behaviors. We also discover that LLM-based agents who communicate in natural language can cooperate better with humans and earn more trust from them. We believe that our work indicates promising future avenues to design even stronger embodied agents with Large Language Models for multi-agent cooperation. We further perform an in-depth analysis of the limitations of current LLMs and highlight several potential solutions for building Embodied LLMs for the future.
# References
[1] M. Ahn, A. Brohan, N. Brown, Y. Chebotar, O. Cortes, B. David, C. Finn, K. Gopalakrishnan, K. Hausman, A. Herzog, et al. Do as i can, not as i say: Grounding language in robotic affordances. arXiv preprint arXiv:2204.01691, 2022. | 2307.02485#31 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 31 | [HCB+19] Yanping Huang, Youlong Cheng, Ankur Bapna, Orhan Firat, Dehao Chen, Mia Xu Chen, HyoukJoong Lee, Jiquan Ngiam, Quoc V. Le, Yonghui Wu, and Zhifeng Chen. Gpipe: Efficient training of giant neural networks using pipeline parallelism. In NeurIPS 2019, pages 103–112, 2019.
[HDW+23] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, and Furu Wei. Language is not all you need: Aligning perception with language models. ArXiv, abs/2302.14045, 2023.
[HZRS16] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770–778, 2016.
| 2307.02486#31 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 31 | Wang, Yizhong et al. (2023). Self-Instruct: Aligning Language Models with Self-Generated Instructions. arXiv: 2212.10560 [cs.CL].
Longpre, Shayne et al. (2023). The Flan Collection: Designing Data and Methods for Effective Instruction Tuning. arXiv: 2301.13688 [cs.AI].
Zhou, Chunting et al. (2023). LIMA: Less Is More for Alignment. arXiv: 2305.11206 [cs.CL].
Anand, Yuvanesh et al. (2023). GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo. https://github.com/nomic-ai/gpt4all.
Touvron, Hugo et al. (2023). LLaMA: Open and Efficient Foundation Language Models. arXiv: 2302.13971 [cs.CL].
Zhang, Susan et al. (2022). OPT: Open Pre-trained Transformer Language Models. arXiv: 2205.01068 [cs.CL]. | 2307.03692#31 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and
further finetuning can result in changes in the underlying base model
semantics. As an example of semantic change, we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 32 | retains representations from text information using pre-trained language models like BERT. Specifically, given an item i and its description with m tokens Di = {t1, t2, ..., tm}, an extra start-of-sequence token [CLS] is added to the description Di = {[CLS], t1, t2, ..., tm}. Then, the description is fed as the input to LLMs. Finally, the embedding of the token [CLS] could be used as the ID-agnostic item representation. | 2307.02046#32 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
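A minimal sketch of the [CLS]-pooling procedure the chunk above describes, using the Hugging Face transformers API; bert-base-uncased stands in for whatever encoder IDA-SR actually uses, and the tokenizer prepends [CLS] automatically.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

description = "wireless noise cancelling over-ear headphones"
inputs = tokenizer(description, return_tensors="pt")  # adds [CLS] ... [SEP]

with torch.no_grad():
    outputs = encoder(**inputs)

item_repr = outputs.last_hidden_state[:, 0]  # the [CLS] token embedding
print(item_repr.shape)                       # torch.Size([1, 768])
```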
2307.02053 | 32 | Yujia Li, David Choi, Junyoung Chung, Nate Kushman, Julian Schrittwieser, Rémi Leblond, Tom Eccles, James Keeling, Felix Gimeno, Agustin Dal Lago, Thomas Hubert, Peter Choy, Cyprien de Masson d'Autume, Igor Babuschkin, Xinyun Chen, Po-Sen Huang, Johannes Welbl, Sven Gowal, Alexey Cherepanov, James Molloy, Daniel Mankowitz, Esme Sutherland Robson, Pushmeet Kohli, Nando de Freitas, Koray Kavukcuoglu, and Oriol Vinyals. Competition-level code generation with alphacode. arXiv preprint arXiv:2203.07814, 2022b.
Dan Hendrycks, Steven Basart, Saurav Kadavath, Mantas Mazeika, Akul Arora, Ethan Guo, Collin Burns, Samir Puranik, Horace He, Dawn Song, and Jacob Steinhardt. Measuring coding challenge competence with apps. NeurIPS, 2021b.
Sahil Chaudhary. Code alpaca: An instruction-following llama model for code generation. https://github.com/sahil280114/codealpaca, 2023. | 2307.02053#32 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 32 | [Figure 2 legend: w/o 0-CoT, w/ 0-CoT, CCC, Random.]
Figure 2: Main results. The blue and orange bars represent the default and counterfactual conditions respectively, either with or without 0-shot chain-of-thought (0-CoT) (except code generation; see §A.2). CCC is the counterfactual comprehension check (§2.1), but when applicable, we report it for the default setting too. Random performance is marked whenever nontrivial. PaLM-2 here is not the largest version (§4). The CCC for code execution/generation are identical. For spatial reasoning, we average the results from all rotation degrees. Counterfactual performance is consistently lower than the default task performance, while CCC is usually high. §C reports numeric results.
(Figure 2 panel headers: GPT-4, Claude, PaLM-2.) | 2307.02477#32 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
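The evaluation framing in the abstract above, default versus counterfactual performance with a counterfactual comprehension check (CCC), reduces to a small aggregation. The record fields and CCC threshold below are illustrative, not the paper's exact bookkeeping.

```python
def summarize(records, ccc_threshold=0.9):
    """Compare default vs. counterfactual accuracy and flag whether the
    counterfactual number is trustworthy given the comprehension check."""
    acc = lambda key: sum(r[key] for r in records) / len(records)
    default_acc, cf_acc, ccc = acc("default_ok"), acc("cf_ok"), acc("ccc_ok")
    return {"default": default_acc, "counterfactual": cf_acc,
            "gap": default_acc - cf_acc, "cf_reliable": ccc >= ccc_threshold}

records = [  # toy per-example outcomes
    {"default_ok": True, "cf_ok": False, "ccc_ok": True},
    {"default_ok": True, "cf_ok": True,  "ccc_ok": True},
    {"default_ok": True, "cf_ok": False, "ccc_ok": True},
]
print(summarize(records))
```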
2307.02485 | 32 | [2] B. Baker, I. Kanitscheider, T. Markov, Y. Wu, G. Powell, B. McGrew, and I. Mordatch. Emergent tool use from multi-agent autocurricula. arXiv preprint arXiv:1909.07528, 2019.
[3] N. Bard, J. N. Foerster, S. Chandar, N. Burch, M. Lanctot, H. F. Song, E. Parisotto, V. Dumoulin, S. Moitra, E. Hughes, et al. The hanabi challenge: A new frontier for ai research. Artificial Intelligence, 280:103216, 2020.
[4] D. Batra, A. X. Chang, S. Chernova, A. J. Davison, J. Deng, V. Koltun, S. Levine, J. Malik, I. Mordatch, R. Mottaghi, et al. Rearrangement: A challenge for embodied ai. arXiv preprint arXiv:2011.01975, 2020. | 2307.02485#32 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 32 | [JGB+21] Andrew Jaegle, Felix Gimeno, Andy Brock, Oriol Vinyals, Andrew Zisserman, and João Carreira. Perceiver: General perception with iterative attention. In Marina Meila and Tong Zhang, editors, Proceedings of the 38th International Conference on Machine Learning, ICML 2021, 18-24 July 2021, Virtual Event, volume 139 of Proceedings of Machine Learning Research, pages 4651–4664. PMLR, 2021.
[KCL+22] Vijay Korthikanti, Jared Casper, Sangkug Lym, Lawrence McAfee, Michael Andersch, Mohammad Shoeybi, and Bryan Catanzaro. Reducing activation recomputation in large transformer models. CoRR, abs/2205.05198, 2022.
[KKL20] Nikita Kitaev, Lukasz Kaiser, and Anselm Levskaya. Reformer: The efficient transformer. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net, 2020.
| 2307.02486#32 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 32 | Zhang, Susan et al. (2022). OPT: Open Pre-trained Transformer Language Models. arXiv: 2205.01068 [cs.CL].
Gao, Leo et al. (2020). "The Pile: An 800GB Dataset of Diverse Text for Language Modeling". In: arXiv preprint arXiv:2101.00027.
Writer (2023). Palmyra LLMs empower secure, enterprise-grade generative AI for business. Writer Blog. URL: https://writer.com/blog/palmyra/.
Gudibande, Arnav et al. (2023). The False Promise of Imitating Proprietary LLMs. arXiv: 2305.15717 [cs.CL].
OpenAI (2022). ChatGPT: Optimizing language models for dialogue. URL: https://online-chatgpt.com/.
Pichai, Sundar (2023). An important next step on our AI journey. Google AI Blog. URL: https://blog.google/intl/en-africa/products/explore-get-answers/an-important-next-step-on-our-ai-journey/.
AnthropicAI (2023). Introducing Claude. URL: https://www.anthropic.com/index/introducing-claude. | 2307.03692#32 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and
further finetuning can result in changes in the underlying base model
semantics. As an example of semantic change, we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 33 | 4 PRE-TRAINING & FINE-TUNING LLMS FOR RECOMMENDER SYSTEMS In general, there are three key manners in developing and deploying LLMs in recommendation tasks, namely,
[Figure: two LLM pre-training paradigms, Masked Language Modeling (predict [MASK]ed tokens of a large unlabeled corpus) and Next Token Prediction (predict the next token autoregressively).] | 2307.02046#33 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
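The two pre-training paradigms in the figure placeholder above differ only in which positions are scored: next-token prediction shifts the targets by one, while masked language modeling scores only the [MASK]ed slots. A toy sketch of both losses with stand-in tensors (no real model is loaded).

```python
import torch
import torch.nn.functional as F

vocab, seq = 100, 8
logits = torch.randn(1, seq, vocab)         # stand-in LLM outputs
tokens = torch.randint(0, vocab, (1, seq))  # a tokenized behavior/text sequence

# Next Token Prediction (decoder-style): predict token t+1 from position t.
ntp_loss = F.cross_entropy(logits[:, :-1].reshape(-1, vocab),
                           tokens[:, 1:].reshape(-1))

# Masked Language Modeling (encoder-style): predict only the masked slots.
mask = torch.zeros(1, seq, dtype=torch.bool)
mask[0, [2, 5]] = True                      # positions replaced by [MASK]
mlm_loss = F.cross_entropy(logits[mask], tokens[mask])

print(float(ntp_loss), float(mlm_loss))
```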
2307.02477 | 33 | [Figure: per-model panels (GPT-4, Claude, PaLM-2, GPT-3.5) for melody note retrieval in a melody, chess opening identification and legality of the n-th move, and SET missing-card identification; legend: w/o 0-CoT, w/ 0-CoT, CCC, Random.] | 2307.02477#33 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 33 | [5] S. Brodeur, E. Perez, A. Anand, F. Golemo, L. Celotti, F. Strub, J. Rouat, H. Larochelle, and A. Courville. Home: A household multimodal environment. arXiv preprint arXiv:1711.11017, 2017.
[6] S. Bubeck, V. Chandrasekaran, R. Eldan, J. Gehrke, E. Horvitz, E. Kamar, P. Lee, Y. T. Lee, Y. Li, S. Lundberg, H. Nori, H. Palangi, M. T. Ribeiro, and Y. Zhang. Sparks of artificial general intelligence: Early experiments with gpt-4, 2023.
[7] M. Carroll, R. Shah, M. K. Ho, T. Griffiths, S. Seshia, P. Abbeel, and A. Dragan. On the utility of learning about humans for human-ai coordination. Advances in neural information processing systems, 32, 2019. | 2307.02485#33 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 33 | [KLB+22] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The stack: 3 TB of permissively licensed source code. CoRR, abs/2211.15533, 2022.
[KMH+20] Jared Kaplan, Sam McCandlish, Tom Henighan, Tom B. Brown, Benjamin Chess, Rewon Child, Scott Gray, Alec Radford, Jeffrey Wu, and Dario Amodei. Scaling laws for neural language models. CoRR, abs/2001.08361, 2020.
[KVPF20] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In Proceedings of the 37th International Conference on Machine Learning, ICML 2020, 13-18 July 2020, Virtual Event, volume 119 of Proceedings of Machine Learning Research, pages 5156–5165. PMLR, 2020.
+ | 2307.02486#33 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 33 | AnthropicAI (2023). Introducing Claude. URL: https://www.anthropic.com/index/introducing-claude.
Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean (2015). Distilling the Knowledge in a Neural Network. arXiv: 1503.02531 [stat.ML].
Liang, Percy et al. (2022). Holistic Evaluation of Language Models. arXiv: 2211.09110 [cs.CL].
Kwiatkowski, Tom et al. (2019). "Natural Questions: a Benchmark for Question Answering Research". In: Transactions of the Association of Computational Linguistics.
Huggingface (2023b). Open LLM Leaderboard. Accessed: 2023-06-10. URL: https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard.
Gao, Leo et al. (Sept. 2021). A framework for few-shot language model evaluation. Version v0.0.1. DOI: 10.5281/zenodo.5371628. URL: https://doi.org/10.5281/zenodo.5371628. | 2307.03692#33 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 34 | Figure 3: An illustration of two main pre-training methods of LLMs: Masked Language Modeling (left) which randomly masks tokens or spans in the sequence and requires LLMs to generate the masked tokens or spans based on the remaining context, and Next Token Prediction (right) which requires prediction for the next token based on the given context. In pre-training, LLMs are trained on a vast amount of corpus consisting of diverse and unlabeled data.
Table 1: Pre-training methods for LLM-empowered RecSys.
| Paradigms | Methods | Pre-training Tasks | Code Availability |
| --- | --- | --- | --- |
| Pre-training | PTUM [73] | Masked Behavior Prediction, Next K Behavior Prediction | https://github.com/wuch15/PTUM |
| Pre-training | M6 [60] | Auto-regressive Generation | Not available |
| Pre-training | P5 [62] | Multi-task Modeling | https://github.com/jeykigung/P5 |
pre-training, fine-tuning, and prompting. In this section, we first introduce the pre-training and fine-tuning paradigms, which are shown in Figure 3 and Figure 4, respectively. More specifically, we will focus on the specific pre-training tasks applied in LLMs for recommender systems and fine-tuning strategies for better performance in downstream recommendation tasks. Note that the works mentioned below are summarized in Table 1 and Table 2.
# 4.1 Pre-training Paradigm for Recommender Systems | 2307.02046#34 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 34 |
Figure 3: Main results (continued). The blue and orange bars represent the default and counterfactual conditions respectively, either with or without 0-shot chain-of-thought (0-CoT). CCC is the counterfactual comprehension check (§2.1), but when applicable, we report it for the default setting too. Random performance is marked whenever nontrivial. PaLM-2 here is not the largest version (§4). Counterfactual performance is consistently lower than the default task performance, while CCC is usually high. §C reports numeric results.
# 5 Analysis
We now investigate how a variety of factors affect the default and counterfactual performance trends that we observed in §4. Unless otherwise specified, we only consider GPT-4 with 0-shot CoT, which has the strongest performance in our results above.
# 5.1 "Commonness" of Counterfactual Conditions
performance. These correlations between the counterfactual performance and the commonness of the counterfactual worlds paint a more fine-grained picture than a binary default-versus-counterfactual distinction and point to a memorization-like effect where the models perform better under more common conditions.
# 5.2 Proximity between Default and Counterfactual Conditions | 2307.02477#34 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 34 | [8] A. Das, T. Gervet, J. Romoff, D. Batra, D. Parikh, M. Rabbat, and J. Pineau. Tarmac: Targeted multi-agent communication. In International Conference on Machine Learning, pages 1538–1546. PMLR, 2019.
[9] M. Deitke, D. Batra, Y. Bisk, T. Campari, A. X. Chang, D. S. Chaplot, C. Chen, C. P. DâArpino, K. Ehsani, A. Farhadi, et al. Retrospectives on the embodied ai workshop. arXiv preprint arXiv:2210.06849, 2022. | 2307.02485#34 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 34 | [LJX+19] Shiyang Li, Xiaoyong Jin, Yao Xuan, Xiyou Zhou, Wenhu Chen, Yu-Xiang Wang, and Xifeng Yan. Enhancing the locality and breaking the memory bottleneck of transformer on time series forecasting. ArXiv, abs/1907.00235, 2019.
[LLK+19] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam R. Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In Kamalika Chaudhuri and Ruslan Salakhutdinov, editors, Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 3744–3753. PMLR, 2019.
[LLX+21] Dmitry Lepikhin, HyoukJoong Lee, Yuanzhong Xu, Dehao Chen, Orhan Firat, Yanping Huang, Maxim Krikun, Noam Shazeer, and Zhifeng Chen. Gshard: Scaling giant models with conditional computation and automatic sharding. In ICLR 2021, 2021. | 2307.02486#34 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
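The dilated attention mechanism summarized in the abstract above can be sketched compactly. The following is a minimal NumPy illustration and not the paper's implementation: the single-head, non-causal setting, the segment length `w`, and the dilation rate `r` are all simplifying assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(Q, K, V, w, r):
    """One (w, r) configuration: split the sequence into segments of
    length w, keep every r-th row inside each segment, and attend only
    among the kept rows. Per-segment cost is (w/r)^2, so the total cost
    stays linear in sequence length. Causal masking is omitted here."""
    n, d = Q.shape
    out = np.zeros_like(V)
    for start in range(0, n, w):
        idx = np.arange(start, min(start + w, n))[::r]  # sparsified segment
        attn = softmax(Q[idx] @ K[idx].T / np.sqrt(d))
        out[idx] = attn @ V[idx]
    return out

# Mixing configurations -- small w dense, large w sparse -- is what gives
# the exponentially expanding attentive field described in the abstract.
rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(3, 32, 16))  # three (32, 16) matrices
mixed = (dilated_attention(Q, K, V, w=4, r=1) +
         dilated_attention(Q, K, V, w=16, r=4)) / 2
print(mixed.shape)  # (32, 16)
```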
2307.03692 | 34 | Chen, Hao et al. (2023). Maybe Only 0.5% Data is Needed: A Preliminary Exploration of Low Training Data Instruction Tuning. arXiv: 2305.09246 [cs.AI].
Kumar, Ananya et al. (2022). Fine-Tuning can Distort Pretrained Features and Underperform Out-of-Distribution. arXiv: 2202.10054 [cs.LG].
Brown, Tom B. et al. (2020). Language Models are Few-Shot Learners. arXiv: 2005.14165 [cs.CL].
Radford, Alec et al. (2018). "Language Models are Unsupervised Multitask Learners". In: URL: https://d4mucfpksywv.cloudfront.net/better-language-models/language-models.pdf.
Ouyang, Long et al. (2022). Training language models to follow instructions with human feedback. arXiv: 2203.02155 [cs.CL].
Schulman, John et al. (2017). Proximal Policy Optimization Algorithms. arXiv: 1707.06347 [cs.LG].
Hendrycks, Dan et al. (2021). Measuring Massive Multitask Language Understanding. arXiv: 2009.03300 [cs.CY]. | 2307.03692#34 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 35 | # 4.1 Pre-training Paradigm for Recommender Systems
Pre-training is an important step in developing LLMs. It involves training LLMs on a vast amount of corpus consisting of diverse and unlabeled data. This strategy enables LLMs to acquire a broad understanding of various linguistic aspects, including grammar, syntax, semantics, and even common sense reasoning. Through pre-training, LLMs can learn to recognize and generate coherent and contextually appropriate responses. In general, there are two main methods to pre-train LLMs in the natural language domain, depending on the adopted model structure. One is Masked Language Modeling (MLM) for encoder-only or encoder-decoder Transformer structures, which randomly masks tokens or spans in the sequence and requires LLMs to generate the masked tokens or spans based on the remaining context [82]. The other is Next Token Prediction (NTP) for decoder-only Transformer structures, which requires prediction for the next token based on the given context [41].
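To make the two objectives concrete, here is a minimal sketch of how training pairs are built under each. The mask token id, masking rate, and the -100 ignore-index convention are illustrative assumptions, not tied to any specific model.

```python
import random

MASK_ID = 0    # assumed id for the [MASK] token
IGNORE = -100  # positions the loss should skip

def mlm_pair(tokens, mask_prob=0.15):
    """Masked Language Modeling: corrupt random positions and ask
    the model to recover the original tokens at those positions."""
    inputs, labels = [], []
    for t in tokens:
        if random.random() < mask_prob:
            inputs.append(MASK_ID)
            labels.append(t)
        else:
            inputs.append(t)
            labels.append(IGNORE)
    return inputs, labels

def ntp_pair(tokens):
    """Next Token Prediction: the target is the input shifted by one."""
    return tokens[:-1], tokens[1:]

seq = [5, 17, 42, 8, 99, 23]
print(mlm_pair(seq))  # e.g. ([5, 0, 42, 8, 99, 23], [-100, 17, -100, ...])
print(ntp_pair(seq))  # ([5, 17, 42, 8, 99], [17, 42, 8, 99, 23])
```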
a span of tokens, PTUM only masks a single user behavior with the goal of predicting the masked behavior based on the other behaviors in the interaction sequence of the target user. On the other side, NBP models the relevance between past and future behaviors, which is crucial for user modeling. The goal of NBP is to predict the next k behaviors based on the user-item interaction history. | 2307.02046#35 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 35 | # 5.1 "Commonness" of Counterfactual Conditions
Our counterfactual worlds are not designed to be completely alien to the LMs but only less common than the assumed default case. In this sense, the counterfactual-ness of these worlds is relative, and here we take a more nuanced look at how the commonness of these counterfactual conditions affects the default-counterfactual performance gap. For example, in the arithmetic task, all models perform better in bases 8 and 16, likely due to their relative abundance compared to bases 9 and 11. In spatial reasoning, the smallest counterfactual performance degradation usually occurs when the north and south directions are swapped (even exceeding the default task performance for PaLM-2), potentially because some programming libraries use an inverted y-axis, such as matplotlib (Python), ggplot (R), and D3 (JavaScript) (see §A.5). For chord fingering, the common alternative drop-D tuning of guitars leads to the highest counterfactual | 2307.02477#35 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
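To make the base-comparison discussion above concrete, counterfactual arithmetic items of this kind can be generated and graded mechanically. The digit alphabet and prompt wording below are illustrative assumptions, not the paper's exact templates.

```python
DIGITS = "0123456789ABCDEF"

def to_base(n, b):
    """Render a non-negative integer in base b (2 <= b <= 16)."""
    if n == 0:
        return "0"
    out = []
    while n:
        n, d = divmod(n, b)
        out.append(DIGITS[d])
    return "".join(reversed(out))

def addition_item(x, y, base):
    """One counterfactual addition problem and its gold answer."""
    prompt = f"In base-{base}, what is {to_base(x, base)} + {to_base(y, base)}?"
    return prompt, to_base(x + y, base)

# Bases 8 and 16 are relatively common in pretraining data; 9 and 11 are not.
for b in (8, 9, 11, 16):
    print(addition_item(27, 45, b))
```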
2307.02485 | 35 | [10] D. Driess, F. Xia, M. S. M. Sajjadi, C. Lynch, A. Chowdhery, B. Ichter, A. Wahid, J. Tompson, Q. Vuong, T. Yu, W. Huang, Y. Chebotar, P. Sermanet, D. Duckworth, S. Levine, V. Vanhoucke, K. Hausman, M. Toussaint, K. Greff, A. Zeng, I. Mordatch, and P. Florence. Palm-e: An embodied multimodal language model. In arXiv preprint arXiv:2303.03378, 2023.
[11] C. Gan, J. Schwartz, S. Alter, D. Mrowca, M. Schrimpf, J. Traer, J. De Freitas, J. Kubilius, A. Bhandwaldar, N. Haber, et al. Threedworld: A platform for interactive multi-modal physical simulation. arXiv preprint arXiv:2007.04954, 2020. | 2307.02485#35 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 35 | [LXLY21] Shenggui Li, Fuzhao Xue, Yongbin Li, and Yang You. Sequence parallelism: Making 4d parallelism possible. CoRR, abs/2105.13120, 2021.
[MKW+21] Xuezhe Ma, Xiang Kong, Sinong Wang, Chunting Zhou, Jonathan May, Hao Ma, and Luke Zettlemoyer. Luna: Linear unified nested attention. In Marc'Aurelio Ranzato, Alina Beygelzimer, Yann N. Dauphin, Percy Liang, and Jennifer Wortman Vaughan, editors, Advances in Neural Information Processing Systems 34: Annual Conference on Neural Information Processing Systems 2021, NeurIPS 2021, December 6-14, 2021, virtual, pages 2441–2453, 2021.
[MWH+22] Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, and Furu Wei. TorchScale: Transformers at scale. CoRR, abs/2211.13184, 2022.
+ | 2307.02486#35 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 35 | Hendrycks, Dan et al. (2021). Measuring Massive Multitask Language Understanding. arXiv: 2009.03300 [cs.CY].
Köpf, Andreas et al. (2023). OpenAssistant Conversations – Democratizing Large Language Model Alignment. arXiv: 2304.07327 [cs.CL].
Huggingface (2023a). AutoTrain: Create powerful AI models without code. URL: https://huggingface.co/autotrain.
Wei, Jason et al. (2022). Emergent Abilities of Large Language Models. arXiv: 2206.07682 [cs.CL].
| 2307.03692#35 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 36 | M6 [60] also adopts two pre-training objectives motivated by the two classical pre-training tasks, namely a text-infilling objective and an auto-regressive language generation objective, corresponding to the above two pre-training tasks, respectively. To be more specific, the text-infilling objective follows the pre-training task of BART [83], which randomly masks a span with several tokens in the text sequence and predicts these masked spans as the pre-training target, providing the capability to assess the plausibility of a text or an event in the recommendation scoring tasks. Meanwhile, the auto-regressive language generation objective follows the Next Token Prediction task in natural language pre-training, but it is slightly different as it predicts the unmasked sentence based on the masked sequence.
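A minimal sketch of the text-infilling objective just described (BART-style span corruption) follows; the span length, sentinel token, and sampling scheme are illustrative assumptions.

```python
import random

def infilling_pair(tokens, span_len=2, sentinel="[MASK]"):
    """Text infilling: replace one contiguous span with a single
    sentinel token; the target is the span that was removed."""
    start = random.randrange(len(tokens) - span_len + 1)
    corrupted = tokens[:start] + [sentinel] + tokens[start + span_len:]
    target = tokens[start:start + span_len]
    return corrupted, target

sent = ["the", "user", "clicked", "the", "red", "sneakers"]
print(infilling_pair(sent))
# e.g. (['the', 'user', '[MASK]', 'red', 'sneakers'], ['clicked', 'the'])
```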
Additionally, P5 adopts multi-mask modeling and mixes datasets of various recommendation tasks for pre-training. In this case, it can be generalized to various recommendation tasks and even unseen tasks with zero-shot generation ability [62]. Across different recommendation tasks, P5 applies a unified indexing method for representing users and items in language sequence as stated in Section 3 so that the Masked Language Modelling task could be employed. | 2307.02046#36 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
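To illustrate the P5-style setup described in the chunk above, here is a sketch of casting distinct recommendation tasks into one shared text-to-text format over unified user/item indices. The prompt templates are invented for illustration and are not P5's actual templates.

```python
# Every task becomes an (input text, target text) pair over unified ids
# such as "user_17" and "item_42", so a single language model serves all
# tasks and can even be queried zero-shot with unseen templates.
def rating_example(user, item, rating):
    return f"How will {user} rate {item}, on a scale of 1-5?", str(rating)

def next_item_example(user, history, nxt):
    seq = ", ".join(history)
    return f"{user} has interacted with {seq}. Predict the next item.", nxt

tasks = [
    rating_example("user_17", "item_42", 4),
    next_item_example("user_17", ["item_42", "item_7"], "item_93"),
]
for source, target in tasks:
    print(f"input:  {source}\ntarget: {target}")
```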
2307.02477 | 36 | Another axis along which the counterfactual worlds differ is in their proximity to the default conditions. For example, for the different arithmetic bases, bases 9 and 11 are closer to base 10, but less common than bases 8 and 16. While the default-counterfactual gap is most affected by commonness for the arithmetic task, for the guitar and ukulele tunings (other than the drop-D tuning), the LM performance generally decreases monotonically with the distance from the original tunings.
The FOLIO dataset (Han et al., 2022) enables another analysis of how proximity to the default conditions affects the model performance, without performing counterfactual perturbations. This dataset was constructed to mostly follow common sense, i.e., containing premises and conclusions that are deemed true in the real world. However, this is not always the case, with premises such as "John can make meals which are popular at the party," whose
[Figure 4 panels: accuracy (%) under default and counterfactual conditions, plotted against number of digits, note index, default accuracy, number of cards to find, and chord type.] | 2307.02477#36 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 36 | [12] C. Gan, S. Zhou, J. Schwartz, S. Alter, A. Bhandwaldar, D. Gutfreund, D. L. Yamins, J. J. DiCarlo, J. McDermott, A. Torralba, et al. The threedworld transport challenge: A visually guided task-and-motion planning benchmark towards physically realistic embodied ai. In 2022 International Conference on Robotics and Automation (ICRA), pages 8847–8854. IEEE, 2022.
[13] M. Gramopadhye and D. Szafir. Generating executable action plans with environmentally-aware language models. arXiv preprint arXiv:2210.04964, 2022.
[14] S. Huang, L. Dong, W. Wang, Y. Hao, S. Singhal, S. Ma, T. Lv, L. Cui, O. K. Mohammed, Q. Liu, et al. Language is not all you need: Aligning perception with language models. arXiv preprint arXiv:2302.14045, 2023.
| 2307.02485#36 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 36 | [PDB+22] Zhiliang Peng, Li Dong, Hangbo Bao, Qixiang Ye, and Furu Wei. BEiT v2: Masked image modeling with vector-quantized visual tokenizers. 2022.
[PMN+23] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. CoRR, abs/2302.10866, 2023.
[PWD+23] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. ArXiv, abs/2306, 2023.
[QHS+22] Zhen Qin, Xiaodong Han, Weixuan Sun, Dongxu Li, Lingpeng Kong, Nick Barnes, and Yiran Zhong. The devil in linear transformer. In Yoav Goldberg, Zornitsa Kozareva, and Yue Zhang, editors, Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, EMNLP 2022, Abu Dhabi, United Arab Emirates, December 7-11, 2022, pages 7025–7041. Association for Computational Linguistics, 2022.
+ | 2307.02486#36 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.02046 | 37 | In the context of recommender systems, most of the existing works follow the two classical pre-training strategies. Next, we will introduce representative methods. PTUM [73] proposes two similar pre-training tasks, Masked Behavior Prediction (MBP) and Next K Behavior Prediction (NBP), to model user behaviors in recommender systems. Unlike language tokens, user behaviors are more diverse and thus more difficult to predict. In this case, instead of masking
# 4.2 Fine-tuning Paradigm for Recommender Systems
Fine-tuning is a crucial step in deploying pre-trained LLMs for specific downstream tasks. Especially for recommendation tasks, LLMs require fine-tuning to grasp more domain knowledge. Particularly, the fine-tuning paradigm involves training the pre-trained model based on task-specific
[Figure 4: An illustration of the fine-tuning paradigm. Full-model fine-tuning updates all LLM parameters with the loss computed on task-specific data, while parameter-efficient fine-tuning (e.g., adapters) updates only a small set of added parameters while keeping the LLM frozen.] | 2307.02046#37 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
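As a concrete companion to the full-model versus parameter-efficient fine-tuning contrast drawn in the chunk above, here is a minimal PyTorch-style sketch of the parameter-efficient option. The backbone stand-in, adapter width, and placeholder loss are all assumptions for illustration.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project,
    added residually on top of the frozen model's output."""
    def __init__(self, dim, bottleneck=16):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, h):
        return h + self.up(torch.relu(self.down(h)))

backbone = nn.Linear(768, 768)   # stand-in for a pre-trained LLM block
for p in backbone.parameters():
    p.requires_grad = False      # parameter-efficient: backbone stays frozen

adapter = Adapter(768)           # only ~25k parameters are trained
opt = torch.optim.AdamW(adapter.parameters(), lr=1e-4)

x = torch.randn(4, 768)
loss = adapter(backbone(x)).pow(2).mean()  # placeholder task loss
loss.backward()
opt.step()
```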
2307.02477 | 37 |
(a) Addition accuracy with varying numbers of digits in the operands. (b) Melody note recall accuracy broken down by the index of the note. (c) SET identification accuracy when needing to find different numbers of cards in a SET. (d) Fret placement accuracy by chord type. The y-axis averages over all altered tunings.
Figure 4: Investigating the relationship between the default task performance and counterfactual performance, broken down by different factors. Only GPT-4 with 0-shot CoT results are shown. There is a consistent default-counterfactual correlation across tasks and when varying different factors.
[Figure 5 bar chart: coefficient values for GPT-4, GPT-3.5, Claude, and PaLM-2 over the features % True, % False, % Unknown premises, and Concl. Truth. Match.]
Figure 5: Logistic regression coefficients of features that predict whether an LM correctly predicts the label of an instance. âConcl. Truth. Matchâ is a binary feature that is 1 iff the instance label matches the (LM-believed) truthfulness of the conclusion. The 95% confidence intervals are also shown. LMs tend to predict more correctly when there are more true premises, when the instance label matches the conclusion truthfulness, but less correctly with more false and unknown premises. | 2307.02477#37 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
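The regression behind Figure 5 can be reproduced schematically as follows; the feature names mirror the caption, while the data here is synthetic rather than the paper's FOLIO annotations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([
    rng.uniform(0, 1, n),   # fraction of premises believed true
    rng.uniform(0, 1, n),   # fraction believed false
    rng.uniform(0, 1, n),   # fraction believed unknown
    rng.integers(0, 2, n),  # label matches conclusion truthfulness (0/1)
])
y = rng.integers(0, 2, n)   # 1 iff the LM predicted the label correctly

clf = LogisticRegression().fit(X, y)
for name, coef in zip(["% true", "% false", "% unknown", "concl. match"],
                      clf.coef_[0]):
    print(f"{name:>12}: {coef:+.2f}")
```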
2307.02485 | 37 |
[15] W. Huang, P. Abbeel, D. Pathak, and I. Mordatch. Language models as zero-shot planners: Extracting actionable knowledge for embodied agents. In International Conference on Machine Learning, pages 9118–9147. PMLR, 2022.
[16] W. Huang, F. Xia, T. Xiao, H. Chan, J. Liang, P. Florence, A. Zeng, J. Tompson, I. Mordatch, Y. Chebotar, et al. Inner monologue: Embodied reasoning through planning with language models. arXiv preprint arXiv:2207.05608, 2022.
[17] M. Jaderberg, W. M. Czarnecki, I. Dunning, L. Marris, G. Lever, A. G. Castaneda, C. Beattie, N. C. Rabinowitz, A. S. Morcos, A. Ruderman, et al. Human-level performance in 3d multiplayer games with population-based reinforcement learning. Science, 364(6443):859–865, 2019. | 2307.02485#37 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 37 | [SDP+22] Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer. CoRR, abs/2212.10554, 2022.
[SPP+19] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. CoRR, abs/1909.08053, 2019.
[SWL23] Jimmy T. H. Smith, Andrew Warrington, and Scott W. Linderman. Simplified state space layers for sequence modeling. In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
[TDA+21] | 2307.02486#37 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
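As a rough illustration of the dilated attention described in the LongNet abstract above, the following is a minimal sketch of the indexing pattern for a single (segment length, dilation rate) pair; LongNet mixes several such pairs with geometrically growing sizes so that the attended field expands exponentially with distance. The function name and interface here are illustrative, not the paper's implementation.

```python
def dilated_attention_groups(seq_len: int, segment: int, dilation: int):
    """Yield the index groups that attend to one another under one
    (segment, dilation) pair: the sequence is split into segments, and
    positions at the same offset modulo `dilation` within a segment
    form a group."""
    for start in range(0, seq_len, segment):
        end = min(start + segment, seq_len)
        for offset in range(dilation):
            group = list(range(start + offset, end, dilation))
            if group:
                yield group

# With segment=8 and dilation=2 on a length-16 sequence, the groups are
# [0, 2, 4, 6], [1, 3, 5, 7], [8, 10, 12, 14], and [9, 11, 13, 15].
print(list(dilated_attention_groups(16, 8, 2)))
```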
2307.02046 | 38 | Figure 4: An illustration of two main fine-tuning methods of LLMs: Full-model Fine-tuning (left), which involves changing the entire model weights, and Parameter-efficient Fine-tuning (right), which involves fine-tuning a small proportion of model weights or a few extra trainable weights while fixing most of the parameters in LLMs. In fine-tuning, LLMs are trained on a relatively small corpus of task-specific data (i.e., small compared to the pre-training corpus).
Table 2: Fine-tuning methods applied in LLM-empowered RecSys.
Paradigms    Fine-tuning Methods              References
Fine-tuning  Full-model Fine-tuning           [74], [75], [76], [77], [78], [79], and [80]¹
             Parameter-efficient Fine-tuning  [59]², [81], and [60]
Code Availability: ¹https://github.com/veason-silverbullet/unitrec, ²https://github.com/sai990323/tallrec | 2307.02046#38 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 38 | features, as well as their 95% confidence intervals, computed by bootstrapping with 1,000 iterations (Efron and Tibshirani, 1993). Ideally, a robust model should predict solely based on symbolic deduction, and extralinguistic truthfulness information should not affect its accuracy. In other words, these features should all have coefficients 0 and have no predictive power with respect to the model's correctness. However, all LMs predict more correctly with more realistic (true) premises, and when the conclusion's LM-predicted truthfulness matches the label. On the other hand, they tend to perform worse when there are more false or uncertain premises. Most of these trends are statistically significant. This means that the reasoning ability of LMs is affected by the distance between the real world (as believed by the LMs), which is the default condition, and the world state under which reasoning is required.
Overall, these results show that LMs tend to perform better on task variants that are closer to the default instantiation of a task.
factuality cannot be determined alone. | 2307.02477#38 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
[18] U. Jain, L. Weihs, E. Kolve, A. Farhadi, S. Lazebnik, A. Kembhavi, and A. Schwing. A cordial sync: Going beyond marginal policies for multi-agent embodied tasks. In Computer Vision – ECCV 2020: 16th European Conference, Glasgow, UK, August 23–28, 2020, Proceedings, Part V 16, pages 471–490. Springer, 2020.
[19] U. Jain, L. Weihs, E. Kolve, M. Rastegari, S. Lazebnik, A. Farhadi, A. G. Schwing, and A. Kembhavi. Two body problem: Collaborative visual task completion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, pages 6689–6699, 2019.
[20] N. Jaques, A. Lazaridou, E. Hughes, C. Gulcehre, P. Ortega, D. Strouse, J. Z. Leibo, and N. De Freitas. Social influence as intrinsic motivation for multi-agent deep reinforcement learning. In International conference on machine learning, pages 3040–3049. PMLR, 2019.
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 38 | [TDA+21] Yi Tay, Mostafa Dehghani, Samira Abnar, Yikang Shen, Dara Bahri, Philip Pham, Jinfeng Rao, Liu Yang, Sebastian Ruder, and Donald Metzler. Long range arena: A benchmark for efficient transformers. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net, 2021.
[VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In NeurIPS 2017, pages 5998–6008, 2017.
[WBD+23] Wenhui Wang, Hangbo Bao, Li Dong, Johan Bjorck, Zhiliang Peng, Qiang Liu, Kriti Aggarwal, Owais Khan Mohammed, Saksham Singhal, Subhojit Som, and Furu Wei. Image as a foreign language: BEiT pretraining for vision and vision-language tasks. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023.
[WCL+20] | 2307.02486#38 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.02046 | 39 | recommendation datasets that include user-item interaction behaviors (e.g., purchase, click, ratings) and side knowledge about users and items (e.g., users' social relations and items' descriptions). This process allows the model to specialize its knowledge and parameters to improve performance in the recommendation domain. In general, fine-tuning strategies can be divided into two categories according to the proportion of model weights changed to fit the given task. One is full-model fine-tuning, which changes the entire model weights in the fine-tuning process. Considering the computation cost, the other is parameter-efficient fine-tuning, which aims to change only a small part of the weights or develop trainable adapters to fit specific tasks.
preserving large-scale recommender systems by applying differentially private (DP) LLMs, which relieves certain challenges and limitations in DP training.
Contrastive learning has also emerged as a popular approach for fine-tuning LLMs in recommender systems. Several methods have been proposed in this direction. SBERT [79] introduces a triplet loss function, where an intent sentence serves as the anchor and corresponding products are used as positive and negative examples in the e-commerce domain. Additionally, UniTRec [80] proposes a unified framework that combines discriminative matching scores and candidate text perplexity as contrastive objectives to improve text-based recommendations. | 2307.02046#39 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
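The triplet-style contrastive objective mentioned for SBERT [79] in the chunk above can be sketched as follows, assuming PyTorch embeddings; the margin, tensor shapes, and distance function are illustrative rather than the exact formulation used in that work.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor: torch.Tensor, positive: torch.Tensor,
                 negative: torch.Tensor, margin: float = 1.0) -> torch.Tensor:
    """Pull the anchor (intent sentence) toward the positive (matching product)
    and push it away from the negative (non-matching product), up to a margin."""
    d_pos = F.pairwise_distance(anchor, positive)  # shape: (batch,)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy usage with random 768-dimensional sentence embeddings.
a, p, n = (torch.randn(4, 768) for _ in range(3))
loss = triplet_loss(a, p, n)
```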
2307.02477 | 39 | Overall, these results show that LMs tend to perform better on task variants that are closer to the default instantiation of a task.
factuality cannot be determined alone.
We evaluate how the distance between the world state described by the premises and the belief state of LMs influences LM performance by training a predictive model given features approximating this distance. For each test instance, we ask the LMs whether the premises and conclusion are true, false, or uncertain. We train a logistic regression model to predict LM correctness on each test instance, using as features the total number of premises in an input, the proportion of the premises that are true/false/uncertain, as encoded by the LM, as well as whether the LM-predicted truthfulness of the conclusion matches the label of the instance (that is, a feature that predicts the entailment/neutral/contradiction label of the instance from the truthfulness of the conclusion alone, ignoring premises).
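A minimal sketch of this analysis, assuming the per-instance features have already been extracted into arrays; the file names and feature ordering here are illustrative:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# X: one row per test instance.
#   column 0    -> total number of premises
#   columns 1-3 -> proportion of premises the LM judges true / false / uncertain
#   column 4    -> 1 iff the LM-judged conclusion truthfulness matches the label
# y: 1 iff the LM predicted the instance label correctly.
X = np.load("features.npy")     # hypothetical file
y = np.load("correctness.npy")  # hypothetical file

coefs = LogisticRegression().fit(X, y).coef_[0]

# 95% confidence intervals via bootstrap over 1,000 resamples of the instances
# (assumes both classes of y appear in each resample).
rng = np.random.default_rng(0)
boot = []
for _ in range(1000):
    idx = rng.integers(0, len(y), size=len(y))  # resample with replacement
    boot.append(LogisticRegression().fit(X[idx], y[idx]).coef_[0])
ci_low, ci_high = np.percentile(boot, [2.5, 97.5], axis=0)
```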
# 5.3 Relationship between Default vs. Counterfactual Performance
Recalling our formalization hLM(f, w, x) in §2, the previous two subsections analyzed how the commonness of w and its proximity to wdefault affect the observed patterns. We now explore how the counterfactual performance correlates with the default task performance by varying the other three elements in this formalization: the task f, the input x, and the LM. | 2307.02477#39 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 39 | [21] J. Jiang and Z. Lu. Learning attentional communication for multi-agent cooperation. Advances in neural information processing systems, 31, 2018.
[22] T. Kojima, S. S. Gu, M. Reid, Y. Matsuo, and Y. Iwasawa. Large language models are zero-shot reasoners. arXiv preprint arXiv:2205.11916, 2022.
[23] E. Kolve, R. Mottaghi, W. Han, E. VanderBilt, L. Weihs, A. Herrasti, M. Deitke, K. Ehsani, D. Gordon, Y. Zhu, et al. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474, 2017.
[24] C. Li, R. Zhang, J. Wong, C. Gokmen, S. Srivastava, R. Martín-Martín, C. Wang, G. Levine, M. Lingelbach, J. Sun, et al. Behavior-1k: A benchmark for embodied ai with 1,000 everyday activities and realistic simulation. In Conference on Robot Learning, pages 80–93. PMLR, 2023.
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 39 | [WCL+20] Genta Indra Winata, Samuel Cahyawijaya, Zhaojiang Lin, Zihan Liu, and Pascale Fung. Lightweight and efficient end-to-end speech recognition using low-rank transformer. In 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2020, Barcelona, Spain, May 4-8, 2020, pages 6144–6148. IEEE, 2020.
[WDC+23] Weizhi Wang, Li Dong, Hao Cheng, Xiaodong Liu, Xifeng Yan, Jianfeng Gao, and Furu Wei. Augmenting language models with long-term memory. CoRR, abs/2306.07174, 2023.
[WLK+20] Sinong Wang, Belinda Z. Li, Madian Khabsa, Han Fang, and Hao Ma. Linformer: Self-attention with linear complexity. CoRR, abs/2006.04768, 2020.
[WMD+22] Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. DeepNet: Scaling transformers to 1,000 layers. CoRR, abs/2203.00555, 2022.
[WMH+22] | 2307.02486#39 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
[25] S. Li, X. Puig, C. Paxton, Y. Du, C. Wang, L. Fan, T. Chen, D.-A. Huang, E. Akyürek, A. Anandkumar, et al. Pre-trained language models for interactive decision-making. Advances in Neural Information Processing Systems, 35:31199–31212, 2022.
[26] J. Liang, W. Huang, F. Xia, P. Xu, K. Hausman, B. Ichter, P. Florence, and A. Zeng. Code as policies: Language model programs for embodied control. arXiv preprint arXiv:2209.07753, 2022.
[27] R. Lowe, A. Tamar, J. Harb, O. Pieter Abbeel, and I. Mordatch. Multi-agent actor-critic for mixed cooperative-competitive environments. Advances in neural information processing systems, 30, 2017.
[28] J. Lu, C. Clark, R. Zellers, R. Mottaghi, and A. Kembhavi. Unified-IO: A unified model for vision, language, and multi-modal tasks. arXiv preprint arXiv:2206.08916, 2022.
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 40 | [WMH+22] Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, Barun Patra, Zhun Liu, Vishrav Chaudhary, Xia Song, and Furu Wei. Foundation transformers. CoRR, abs/2210.06423, 2022.
[WRHS22] Yuhuai Wu, Markus Norman Rabe, DeLesley Hutchins, and Christian Szegedy. Memorizing transformers. In The Tenth International Conference on Learning Representations, ICLR 2022, Virtual Event, April 25-29, 2022. OpenReview.net, 2022.
[ZBK+22] Barret Zoph, Irwan Bello, Sameer Kumar, Nan Du, Yanping Huang, Jeff Dean, Noam Shazeer, and William Fedus. Designing effective sparse expert models. CoRR, abs/2202.08906, 2022.
[ZGD+20] | 2307.02486#40 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.02046 | 41 | As a straightforward strategy in deploying pre-trained LLMs to fit specific downstream recommendation tasks, full-model fine-tuning involves changing the entire model weights. For example, RecLLM [74] is proposed to fine-tune LaMDA as a Conversational Recommender System (CRS) for YouTube video recommendation. Meanwhile, GIRL [78] leverages a supervised fine-tuning strategy for instructing LLMs in job recommendation. However, directly fine-tuning LLMs might bring unintended bias into recommender systems, producing serious harm towards specific groups or individuals based on sensitive attributes such as gender, race and occupation. To mitigate such harmful effects, a simple LLMs-driven recommendation (LMRec) [75] is developed to alleviate the observed biases through train-side masking and test-side neutralization of non-preferential entities, which achieves satisfying results without significant performance drops. TransRec [76] studies pre-trained recommender systems in an end-to-end manner, by directly learning from the raw features of the mixture-of-modality items (i.e., texts and images). In this case, without relying on overlapped users or items, TransRec can be | 2307.02046#41 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02477 | 41 | difficulty (Figure 4b). For SET, while our original task shows two cards and asks a model to find the missing one from a 3-card SET, we change the task to instead only show one or none of the cards in a SET, while still requiring the model to identify the SET on a board (Figure 4c). For all these task variants, we see a strong correlation between the original and the counterfactual world performance. We also see this effect when breaking down results by test instances. In Figure 4d, we separate the different chord types, and observe that the default task performance correlates with the counterfactual performance. Similarly, reexamining our main results in Figures 2 and 3, for most tasks, stronger models under default conditions are also stronger models under counterfactual conditions, and vice versa. Overall, these correlations mean that the default task performance can be a good indicator of its counterfactual performance, and hence we should not discount the utility of traditional evaluations. Furthermore, despite our evidence of LMs' overfitting to the default task conditions, these correlations also signify some degree of reasoning that is transferable between the default and counterfactual worlds. This highlights that the question in our title, | 2307.02477#41 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
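For context on the two-card SET variant discussed above: given two cards, the third card completing a SET is uniquely determined, because each attribute must be all-same or all-different across the three cards. A minimal sketch, with an illustrative encoding of cards as tuples of four attribute indices in {0, 1, 2}:

```python
def third_card(a: tuple, b: tuple) -> tuple:
    """Return the unique card completing a SET with cards `a` and `b`.
    Per attribute: a shared value repeats; otherwise the remaining value
    is (-(x + y)) % 3, since 0 + 1 + 2 == 3 == 0 (mod 3)."""
    return tuple((-(x + y)) % 3 for x, y in zip(a, b))

# Shared attributes repeat; differing attributes take the third value.
assert third_card((0, 1, 2, 0), (0, 2, 2, 1)) == (0, 0, 2, 2)
```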
2307.02485 | 41 | [29] D. Misra, A. Bennett, V. Blukis, E. Niklasson, M. Shatkhin, and Y. Artzi. Mapping instructions to actions in 3d environments with visual goal prediction. arXiv preprint arXiv:1809.00786, 2018.
[30] A. Padmakumar, J. Thomason, A. Shrivastava, P. Lange, A. Narayan-Chen, S. Gella, R. Piramuthu, G. Tur, and D. Hakkani-Tur. Teach: Task-driven embodied agents that chat. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, pages 2017–2025, 2022.
[31] V. Pallagani, B. Muppasani, K. Murugesan, F. Rossi, L. Horesh, B. Srivastava, F. Fabiano, and A. Loreggia. Plansformer: Generating symbolic plans using transformers. arXiv preprint arXiv:2212.08681, 2022. | 2307.02485#41 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 41 | [ZGD+20] Manzil Zaheer, Guru Guruganesh, Kumar Avinava Dubey, Joshua Ainslie, Chris Alberti, Santiago Ontañón, Philip Pham, Anirudh Ravula, Qifan Wang, Li Yang, and Amr Ahmed. Big bird: Transformers for longer sequences. In Hugo Larochelle, Marc'Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020.
[ZKHB22] Xiaohua Zhai, Alexander Kolesnikov, Neil Houlsby, and Lucas Beyer. Scaling vision transformers. In IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2022, New Orleans, LA, USA, June 18-24, 2022, pages 1204–1213. IEEE, 2022.
# A Hyperparameters
Hyperparameters     Value
Layers              12
Hidden size         768
FFN size            3072
Heads               12
Learning rate       6e-4
LR scheduler        Polynomial decay
Warm-up steps       750
Tokens per batch    500K
Adam β              (0.9, 0.98)
Training steps      300K
Gradient clipping   2.0
Dropout             0.0
Weight decay        0.01 | 2307.02486#41 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
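To make the learning-rate schedule in the hyperparameter table above concrete, here is a minimal sketch of polynomial decay with linear warm-up; the decay power and final learning rate are assumptions (linear decay to zero), not values reported in the table:

```python
def learning_rate(step: int, peak: float = 6e-4, warmup: int = 750,
                  total: int = 300_000, end: float = 0.0,
                  power: float = 1.0) -> float:
    """Learning rate at a training step: linear warm-up to `peak` over
    `warmup` steps, then polynomial decay toward `end` by step `total`."""
    if step < warmup:
        return peak * step / warmup
    remaining = max((total - step) / (total - warmup), 0.0)
    return end + (peak - end) * remaining ** power

assert abs(learning_rate(750) - 6e-4) < 1e-12  # warm-up ends at the peak rate
assert learning_rate(300_000) == 0.0           # fully decayed at the last step
```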
2307.02046 | 42 | features of the mixture-of-modality items (i.e., texts and images). In this case, without relying on overlapped users or items, TransRec can be effectively transferred to different scenarios. Additionally, Carranza et al. [77] propose privacy-preserving large-scale recommender systems by applying differentially private (DP) LLMs. Full-model fine-tuning requires large computational resources as the size of LLMs scales up. Currently, it is infeasible for a single consumer-level GPU to fine-tune the most advanced LLMs, which usually have more than 10 billion parameters. In this case, Parameter-efficient Fine-tuning (PEFT) targets fine-tuning LLMs efficiently with lower requirements for computational resources. PEFT involves fine-tuning a small proportion of model weights or a few extra trainable weights while fixing most of the parameters in LLMs to achieve comparable performance with full-model fine-tuning. | 2307.02046#42 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
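A minimal sketch of the parameter-efficient idea described in the chunk above, wrapping a frozen PyTorch linear layer with a LoRA-style low-rank adapter; this is an illustrative pattern rather than the exact method of any surveyed system:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen pretrained linear layer plus a small trainable low-rank
    update, so only rank * (in + out) extra weights are tuned."""
    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():  # freeze the pretrained weights
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + x @ self.A.T @ self.B.T

layer = LoRALinear(nn.Linear(768, 768))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")  # ~12K of ~603K parameters
```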
2307.02053 | 42 | Nangia, Niklas Deckers, Niklas Muennighoff, Nitish Shirish Keskar, Niveditha S. Iyer, Noah Constant, Noah Fiedel, Nuan Wen, Oliver Zhang, Omar Agha, Omar Elbaghdadi, Omer Levy, Owain Evans, Pablo Antonio Moreno Casares, Parth Doshi, Pascale Fung, Paul Pu Liang, Paul Vicol, Pegah Alipoormolabashi, Peiyuan Liao, Percy Liang, Peter Chang, Peter Eckersley, Phu Mon Htut, Pinyu Hwang, Piotr Miłkowski, Piyush Patil, Pouya Pezeshkpour, Priti Oli, Qiaozhu Mei, Qing Lyu, Qinlang Chen, Rabin Banjade, Rachel Etta Rudolph, Raefer Gabriel, Rahel Habacker, Ramón Risco Delgado, Raphaël Millière, Rhythm Garg, Richard Barnes, Rif A. Saurous, Riku Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, | 2307.02053#42 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 42 | these correlations also signify some degree of reasoning that is transferable between the default and counterfactual worlds. This highlights that the question in our title, "Reasoning or Reciting?", is not a dichotomy, but rather they can co-exist in a continuum. For example, revisiting the arithmetic results with more digits (Figure 4a), in addition to the default-counterfactual correlation, we also see an effect of memorization: the base-10 performance decreases much more slowly than the other bases. When the input-output mappings are memorized, increased complexity would not affect the default task accuracy much; but when the counterfactual instances are not memorized, the task complexity should inversely correlate with model performance. Occasionally, this default-counterfactual correlation trend is reversed. In the spatial reasoning task, for example, GPT-4 achieves the best accuracy under default conditions with 0-shot CoT, but it also suffers from the largest counterfactual performance degradation. PaLM-2 performs worse under default conditions, but is the most robust to counterfactual perturbations. An obvious possible explanation is that these models could be trained | 2307.02477#42 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
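The chunk of the record above contrasts memorized base-10 arithmetic with unmemorized counterfactual bases. To make that setup concrete, here is a minimal sketch of how such a counterfactual evaluation could be scripted; it is not code from the paper, and `query_model` is a hypothetical stand-in for a real LLM API call.

```python
# Illustrative sketch of a counterfactual-base addition evaluation.
# Assumptions: bases <= 10 (digit symbols 0-9); `query_model` is hypothetical.
import random

def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    raise NotImplementedError

def to_base(n: int, base: int) -> str:
    """Render a non-negative integer as a digit string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n > 0:
        digits.append(str(n % base))
        n //= base
    return "".join(reversed(digits))

def accuracy(base: int, n_trials: int = 100, num_digits: int = 2) -> float:
    """Fraction of sampled addition problems the model answers exactly."""
    lo, hi = base ** (num_digits - 1), base ** num_digits - 1
    correct = 0
    for _ in range(n_trials):
        a, b = random.randint(lo, hi), random.randint(lo, hi)
        prompt = (f"You are a mathematician. All numbers are in base-{base}. "
                  f"What is {to_base(a, base)}+{to_base(b, base)}? "
                  "Answer with the number only.")
        correct += query_model(prompt).strip() == to_base(a + b, base)
    return correct / n_trials
```

Under the memorization account sketched in the chunk, `accuracy(10)` should degrade more slowly with `num_digits` than, say, `accuracy(9)`, since base-10 input-output pairs are far more frequent in pretraining text.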
[32] J. S. Park, J. C. O'Brien, C. J. Cai, M. R. Morris, P. Liang, and M. S. Bernstein. Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442, 2023.
[33] S. Patel, S. Wani, U. Jain, A. G. Schwing, S. Lazebnik, M. Savva, and A. X. Chang. Interpretation of emergent communication in heterogeneous collaborative embodied agents. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 15953–15963, 2021.
[34] X. Puig, K. Ra, M. Boben, J. Li, T. Wang, S. Fidler, and A. Torralba. Virtualhome: Simulating household activities via programs. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2018.
[35] X. Puig, T. Shu, S. Li, Z. Wang, Y.-H. Liao, J. B. Tenenbaum, S. Fidler, and A. Torralba. Watch-and-help: A challenge for social perception and human-ai collaboration. In International Conference on Learning Representations, 2021. | 2307.02485#42 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 43 | Currently, the most popular PEFT methods lie in introducing extra trainable weights as adapters. The adapter structure is designed for embedding into the transformer structure of LLMs [84]. For each Transformer layer, the adapter module is added twice: the first module is added after the projection following the multi-head attention, and the other is added after the two feed-forward layers. During fine-tuning, the original weights of pre-trained LLMs are fixed, while the adapters and layer normalization layers are fine-tuned to fit downstream tasks. Thus, adapters contribute to the expansion and generalization of LLMs, relieving the cost of full-model fine-tuning and the risk of catastrophic forgetting. Inspired by the idea of adapters and the low intrinsic ranks of weight matrices in LLMs, Low-Rank Adaptation of LLMs (LoRA) [85] introduces low-rank decomposition to simulate the change of parameters. Basically, LoRA adds a new pathway to specific modules handling matrix multiplication in the original structure of the LLMs. In the pathway, two serial matrices first reduce the dimension to a predefined dimension of the middle layer and then increase the dimension back. In this case, the dimension of the middle layer could simulate the intrinsic rank. [A minimal LoRA sketch follows this record.] | 2307.02046#43 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
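The LoRA mechanism described in the chunk of the record above (a frozen weight matrix plus a parallel pathway of two serial low-rank matrices) can be sketched in a few lines of PyTorch. This is an illustrative re-implementation of the idea only, not the reference code of LoRA, TallRec, or M6; the rank `r` and scaling `alpha` are arbitrary example values.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen pretrained linear layer with a trainable low-rank update:
    y = W x + (alpha / r) * B (A x), where A is r x d_in and B is d_out x r."""

    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # original LLM weights stay fixed
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Two serial matrices: A reduces to the middle dimension r (the simulated
        # intrinsic rank), and B increases the dimension back to d_out.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no change at start
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The new pathway is added alongside the frozen matmul, simulating delta-W.
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling
```

Wrapping, say, the attention projection layers of a frozen model with `LoRALinear` leaves only `r * (d_in + d_out)` trainable parameters per wrapped layer, which is why methods like TallRec can fine-tune a 7B model on a single consumer GPU.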
2307.02053 | 43 | Arakawa, Robbe Raymaekers, Robert Frank, Rohan Sikand, Roman Novak, Roman Sitelew, Ronan LeBras, Rosanne Liu, Rowan Jacobs, Rui Zhang, Ruslan Salakhutdinov, Ryan Chi, Ryan Lee, Ryan Stovall, Ryan Teehan, Rylan Yang, Sahib Singh, Saif M. Mohammad, Sajant Anand, Sam Dillavou, Sam Shleifer, Sam Wiseman, Samuel Gruetter, Samuel R. Bowman, Samuel S. Schoenholz, Sanghyun Han, Sanjeev Kwatra, Sarah A. Rous, Sarik Ghazarian, Sayan Ghosh, Sean Casey, Sebastian Bischoff, Sebastian Gehrmann, Sebastian Schuster, Sepideh Sadeghi, Shadi Hamdan, Sharon Zhou, Shashank Srivastava, Sherry Shi, Shikhar Singh, Shima Asaadi, Shixiang Shane Gu, Shubh Pachchigar, Shubham Toshniwal, Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva | 2307.02053#43 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02485 | 43 | [36] X. Puig, T. Shu, J. B. Tenenbaum, and A. Torralba. Nopa: Neurally-guided online probabilistic assistance for building socially intelligent home assistants. arXiv preprint arXiv:2301.05223, 2023.
[37] S. S. Raman, V. Cohen, E. Rosen, I. Idrees, D. Paulius, and S. Tellex. Planning with large language models via corrective re-prompting. arXiv preprint arXiv:2211.09935, 2022.
[38] C. Resnick, W. Eldridge, D. Ha, D. Britz, J. Foerster, J. Togelius, K. Cho, and J. Bruna. Pommerman: A multi-agent playground. arXiv preprint arXiv:1809.07124, 2018. | 2307.02485#43 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02046 | 44 | In recommender systems, PEFT can greatly reduce the computational cost of fine-tuning LLMs for recommendation tasks, since it updates far fewer parameters while maintaining most of the model's capabilities. TallRec [59] introduces an efficient and effective tuning framework based on the LLaMA-7B model and LoRA for aligning LLMs with recommendation tasks, which can be executed on a single RTX 3090. GLRec [81] also takes advantage of LoRA for fine-tuning and adapting LLMs as a job recommender. Moreover, M6 [60] also applies LoRA fine-tuning, making it feasible to deploy LLMs on mobile devices.
# 5 PROMPTING LLMS FOR RECOMMENDER SYSTEMS
Apart from the pre-training & fine-tuning paradigm, prompting serves as the latest paradigm for adapting LLMs to specific downstream tasks with the help of task-specific prompts. A prompt refers to a text template that can be applied to the input of LLMs. For example, a prompt "The relation between [X] and [Y] is [Z]." can be designed to deploy LLMs for relation extraction tasks. Prompting enables LLMs to unify different downstream tasks into language generation tasks, which are aligned with their objectives during pre-training [86]. [A minimal prompt-template sketch follows this record.] | 2307.02046#44 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
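To illustrate the prompting paradigm summarized in the chunk of the record above (casting a downstream task as text generation via a task-specific template), here is a minimal sketch. The template wording and slot names are illustrative assumptions, not templates taken from the cited papers.

```python
# Illustrative prompt templates; wording and slot names are assumptions.
RELATION_TEMPLATE = 'The relation between "{head}" and "{tail}" is'
RECSYS_TEMPLATE = (
    "A user recently interacted with: {history}. "
    'Would the user enjoy "{candidate}"? Answer Yes or No:'
)

def fill(template: str, **slots: str) -> str:
    """Render a prompt, leaving the answer for the LLM to generate."""
    return template.format(**slots)

prompt = fill(
    RECSYS_TEMPLATE,
    history="Inception, Interstellar, The Matrix",
    candidate="Blade Runner 2049",
)
# The LLM's continuation ("Yes"/"No") is parsed as the recommendation, so the
# recommendation task is unified with the language-generation objective the
# model was pre-trained on.
```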
2307.02053 | 44 | Shyam Upadhyay, Shyamolima, Debnath, Siamak Shakeri, Simon Thormeyer, Simone Melzi, Siva Reddy, Sneha Priscilla Makini, Soo-Hwan Lee, Spencer Torene, Sriharsha Hatwar, Stanislas Dehaene, Stefan Divic, Stefano Ermon, Stella Biderman, Stephanie Lin, Stephen Prasad, Steven T. Piantadosi, Stuart M. Shieber, Summer Misherghi, Svetlana Kiritchenko, Swaroop Mishra, Tal Linzen, Tal Schuster, Tao Li, Tao Yu, Tariq Ali, Tatsu Hashimoto, Te-Lin Wu, Théo Desbordes, Theodore Rothschild, Thomas Phan, Tianle Wang, Tiberius Nkinyili, Timo Schick, Timofei Kornev, Timothy Telleen-Lawton, Titus Tunduny, Tobias Gerstenberg, Trenton Chang, Trishala Neeraj, Tushar Khot, Tyler Shultz, Uri Shaham, Vedant Misra, Vera Demberg, Victoria Nyamai, Vikas Raunak, | 2307.02053#44 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 44 | [Footnote 8] This is not a new task variant as compared to the setup in §3.7, but rather a decomposition of our original results. [Footnote 9] Though these correlations are not necessarily causal.
[Figure residue removed; the plot shows Accuracy (%) against the Number of Demonstrations (0, 1, 2, 4, 8, 16).]
Figure 6: Two-digit addition accuracy when given different numbers of demonstration examples. The default-counterfactual gap reduces, but is not eliminated.
similar trend but with respect to pretraining FLOPs and termed it "inverse scaling," also provided a memorization-based explanation: they observed that when a task contradicts pretraining texts, similar to how our counterfactual conditions deviate from the default conditions in pretraining, larger LMs tend to rely on the pretraining text and, in turn, fail at the contradictory task. [A minimal 0-shot CoT prompting sketch follows this record.]
# 0-Shot Chain-of-Thought Prompting | 2307.02477#44 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
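The 0-shot chain-of-thought prompting named in the heading of the chunk above amounts to appending a reasoning trigger and then extracting a final answer in a second call. Below is a minimal sketch following the two-stage recipe of Kojima et al. (arXiv:2205.11916); `query_model` is a hypothetical stand-in for a real LLM API call.

```python
def query_model(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with a real client."""
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    """Two-stage 0-shot CoT: first elicit free-form reasoning, then the answer."""
    trigger = "Let's think step by step."
    reasoning = query_model(f"Q: {question}\nA: {trigger}")
    answer = query_model(
        f"Q: {question}\nA: {trigger} {reasoning}\nTherefore, the answer is"
    )
    return answer.strip()
```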
[39] M. Samvelyan, T. Rashid, C. Schroeder de Witt, G. Farquhar, N. Nardelli, T. G. Rudner, C.-M. Hung, P. H. Torr, J. Foerster, and S. Whiteson. The StarCraft multi-agent challenge. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 2186–2188, 2019.
[40] M. Savva, A. Kadian, O. Maksymets, Y. Zhao, E. Wijmans, B. Jain, J. Straub, J. Liu, V. Koltun, J. Malik, et al. Habitat: A platform for embodied AI research. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pages 9339–9347, 2019.
[41] P. Sharma, A. Torralba, and J. Andreas. Skill induction and planning with latent language. arXiv preprint arXiv:2110.01517, 2021. | 2307.02485#44 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |