doi (stringlengths 10–10) | chunk-id (int64 0–936) | chunk (stringlengths 401–2.02k) | id (stringlengths 12–14) | title (stringlengths 8–162) | summary (stringlengths 228–1.92k) | source (stringlengths 31–31) | authors (stringlengths 7–6.97k) | categories (stringlengths 5–107) | comment (stringlengths 4–398, ⌀) | journal_ref (stringlengths 8–194, ⌀) | primary_category (stringlengths 5–17) | published (stringlengths 8–8) | updated (stringlengths 8–8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.02477 | 6 | We propose to measure such task-level generalizability by taking tasks on which LMs perform well, and altering the conditions or rules under which these tasks are performed. The general reasoning procedure for these tasks remains the same under the new conditions, but the specific input-output mappings are changed. We call the new tasks counterfactual tasks, as they deviate from the default, generally assumed conditions for these tasks. Figure 1 shows examples: in the top left, default arithmetic is performed in base-10, while counterfactual arithmetic is performed in base 9. If models implement a general and transferable task-solving procedure, we expect comparable performance on counterfactual and default tasks; if they employ procedures tailored to default task conditions, we expect a drop in the counterfactual performance. (We release our code, all synthetically generated data, and LM interactions (prompts and responses) at https://github.com/ZhaofengWu/counterfactual-evaluation.) | 2307.02477#6 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
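To make the default-versus-counterfactual setup above concrete, here is a minimal Python sketch of how a base-10 addition instance can be recast as its base-9 counterfactual variant. The prompt wording and function names are illustrative assumptions, not the paper's exact evaluation code.

```python
def to_base(n: int, base: int) -> str:
    """Render a non-negative integer as a digit string in the given base."""
    if n == 0:
        return "0"
    digits = []
    while n:
        n, r = divmod(n, base)
        digits.append(str(r))
    return "".join(reversed(digits))

def make_addition_example(a: int, b: int, base: int) -> tuple[str, str]:
    """Build a prompt and its gold answer for a + b rendered in `base`."""
    prompt = f"In base-{base}, what is {to_base(a, base)} + {to_base(b, base)}?"
    return prompt, to_base(a + b, base)

# The same underlying instance under the default and counterfactual worlds:
print(make_addition_example(47, 25, base=10))  # ('In base-10, what is 47 + 25?', '72')
print(make_addition_example(47, 25, base=9))   # ('In base-9, what is 52 + 27?', '80')
```

The underlying reasoning procedure (multi-digit addition with carrying) is identical in both worlds; only the input-output mapping changes.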
2307.02485 | 6 | Planning with Large Language Models Recently, a branch of work has explored the planning capabilities of large language models. Although LLMs still face challenges when solving complex reasoning problems [6], a substantial body of work demonstrates their capacity to assist agents in planning [41, 37, 31, 13, 52, 53], especially in embodied environments [23, 4, 30, 24, 42, 29, 54, 5, 50, 40, 51, 18, 19]. For example, [16] used LLMs to build an inner monologue with environment feedback. [47] achieves better error correction during long-haul planning with LLMs. [1] focused on providing contextual grounding using pretrained behaviors to guide the generation of feasible and contextually appropriate natural language actions. LLMs are also capable of initializing policy networks for agents [25], directly producing plans [44, 10], or generating policy code [26]. More recently, [32] used extended LLMs to simulate human behavior on generative agents. In contrast to most of these works, our method addresses the multi-agent cooperation scenario, which is more complex than planning for a single agent.
# 3 Building Cooperative Embodied Agents with Large Language Models
[Figure 2: overview of the agent framework, showing the Observation, Belief, Communication, Reasoning, and Planning Modules connecting environment observations to actions.] | 2307.02485#6 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"}] |
2307.02486 | 6 | Table 1: Comparison of computation complexity among different methods. N is the sequence length and d is the hidden dimension.
In this work, we successfully scale the sequence length to 1 billion tokens. Our solution is LONGNET, which replaces the attention of vanilla Transformers with a novel component named dilated attention. The general design principle is: attention allocation decreases exponentially as the distance between tokens grows. We prove that it obtains a linear computation complexity and a logarithmic dependency between tokens. This deals with the contradiction between limited attention resources and the accessibility to every token. In the implementation, LONGNET can be transformed into a dense Transformer, which seamlessly supports off-the-shelf optimizations for Transformers (e.g., kernel fusion, quantization, and distributed training). Taking advantage of the linear complexity, LONGNET can parallelize the training across nodes, breaking the constraint of both computation and memory with a distributed algorithm. This allows us to efficiently scale up the sequence length to 1B tokens with nearly constant runtime (see Figure 5), while vanilla Transformer suffers from quadratic complexity.
# 2 LONGNET
# 2.1 Preliminary
The core of Transformers [VSP+17] is self-attention, which maps a query and a set of keys and values to output. Given the inputs $Q, K, V \in \mathbb{R}^{N \times d}$, it computes the outputs $O$ with | 2307.02486#6 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithmic dependency between any two tokens in a
sequence; 2) it can serve as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 6 | To draw a comparison between the learning curve for response tone and the acquisition of semantic and domain-specific knowledge, we propose a supplementary metric called ObjecQA. This auxiliary metric quantifies the objectivity of a model's predictions, as this signal can be identified within the dataset. While this feature choice is arbitrary, we aim to discover possibly more general heuristics for better control over the training phases, including identification of "format-infusion" and "knowledge-infusion" stages.
The paper is organised as follows. In Section 2, we discuss the necessary conditions for a model to be considered an instruct model and data preparation for IFS. The response tone classifier training is described in Section 4. In Section 5, we present results for instruct models and compare them to baseline vanilla models in terms of instruct tone and semantic shifts. The study ends with conclusions and future directions proposed in Section 6.
# 2 Background and Related Work
The response tone alignment problem is a part of a broader intent alignment topic. In principle, LLMs are not aligned with users' intents because their language modeling objective, e.g., predicting the next token of a training document, is different from the instruction-following target. | 2307.03692#6 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2101.00027"}] |
2307.02046 | 7 | [Figure: examples of Large Language Models (LLMs) for Recommender Systems across four tasks (Top-K Recommendation, Rating Prediction, Conversational Recommendation, and Explanation Generation), with sample prompts such as recommending five candidate movies from a watch history or rating "John Wick: Chapter 4" on a 1-10 scale, and responses from models such as ChatGPT, GPT-4, and Vicuna.] | 2307.02046#7 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{"id": "2201.11903"}, {"id": "2305.05973"}, {"id": "2010.15980"}, {"id": "2307.09688"}, {"id": "2307.07171"}, {"id": "2305.15498"}, {"id": "2305.02182"}, {"id": "2305.12090"}, {"id": "2305.07609"}, {"id": "2304.03516"}, {"id": "2303.14524"}, {"id": "2305.15673"}, {"id": "2301.00234"}, {"id": "2305.13112"}, {"id": "2307.10747"}, {"id": "2302.02591"}, {"id": "2305.15062"}, {"id": "2307.15780"}, {"id": "2303.13835"}, {"id": "2307.05722"}, {"id": "2305.07001"}, {"id": "2303.17564"}, {"id": "2305.11700"}, {"id": "2304.03879"}, {"id": "2206.08082"}, {"id": "2305.05065"}, {"id": "2305.00447"}, {"id": "2302.05729"}, {"id": "2304.10149"}, {"id": "2304.01097"}, {"id": "2306.05817"}, {"id": "2304.03153"}, {"id": "2304.04218"}, {"id": "2301.11489"}, {"id": "2305.06569"}, {"id": "2206.06190"}, {"id": "2307.02157"}, {"id": "2305.19860"}, {"id": "2305.15756"}, {"id": "2305.07633"}, {"id": "2305.16582"}, {"id": "2305.08845"}, {"id": "2307.03393"}, {"id": "2304.11116"}, {"id": "2306.06031"}, {"id": "2303.18223"}, {"id": "2305.15036"}, {"id": "2305.17812"}, {"id": "2010.01494"}, {"id": "2205.09666"}, {"id": "2205.08084"}, {"id": "2106.09685"}, {"id": "2106.00573"}, {"id": "2305.11255"}, {"id": "1810.04805"}, {"id": "2204.02311"}, {"id": "2305.06566"}, {"id": "2306.17256"}, {"id": "2305.06212"}, {"id": "2306.02552"}, {"id": "2305.07961"}, {"id": "2203.11171"}, {"id": "2301.12867"}, {"id": "2305.04518"}, {"id": "2305.14552"}, {"id": "2112.08633"}, {"id": "2307.14225"}, {"id": "1511.06939"}, {"id": "2012.15723"}, {"id": "2303.08896"}, {"id": "2306.06615"}, {"id": "2305.15075"}, {"id": "2305.09858"}, {"id": "2209.10117"}, {"id": "2305.06474"}, {"id": "2201.08239"}, {"id": "2302.03735"}, {"id": "2109.01652"}, {"id": "2305.07622"}, {"id": "2306.10933"}] |
2307.02053 | 7 | This work overall has the following contributions:
1. Improving the problem-solving capability of VICUNA through parameter efficient fine-tuning on FLAN-MINI.
2. Introducing an instruction tuning dataset, FLAN-MINI, comprising a diverse set of tasks and templates.
# 2 Training Details
Preparing the FLAN-MINI Collection. Given the enormous size of the FLAN Collection [Longpre et al., 2023], we opted to work with a carefully selected subset that maintains a high level of task diversity while reducing the overall dataset size. In Table 1, we present the specific tasks included in our subset of FLAN, along with their respective dataset sizes. As the public release of the FLAN Collection does not include programming tasks, we augment the collection with existing code datasets. Specifically, we include CodeContests [Li et al., 2022a], APPS [Hendrycks et al., 2021a] and CodeSearchNet [Husain et al., 2019a]. Following the data processing pipeline of FLAN Collection, we sample a fixed number of examples from each dataset, where each example is randomly augmented with different prompt templates. Specifically, the examples are processed with a pool of handcrafted prompt templates and may be used as zero-shot examples or grouped together with few-shot demonstrations [Longpre et al., 2023]. | 2307.02053#7 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2301.13688"}, {"id": "2106.09685"}, {"id": "2203.07814"}, {"id": "1909.09436"}] |
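As a rough illustration of the sampling-and-templating pipeline described in the FLAN-MINI chunk above, the following Python sketch draws a fixed number of examples per dataset and wraps each in a randomly chosen prompt template. The template strings and field names are illustrative assumptions, not the actual FLAN template pool.

```python
import random

# Hypothetical stand-ins for the handcrafted prompt template pool:
TEMPLATES = [
    "Question: {input}\nAnswer: {output}",
    "{input}\n\nWhat is the correct response? {output}",
    "Task: respond to the following.\n{input}\n---\n{output}",
]

def sample_and_augment(dataset, n_samples, seed=0):
    """Sample up to `n_samples` (input, output) pairs and apply a random template."""
    rng = random.Random(seed)
    rows = rng.sample(dataset, min(n_samples, len(dataset)))
    return [rng.choice(TEMPLATES).format(**row) for row in rows]

toy_dataset = [{"input": "2 + 2 = ?", "output": "4"},
               {"input": "Capital of France?", "output": "Paris"}]
print(sample_and_augment(toy_dataset, n_samples=2))
```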
2307.02477 | 7 | We design a suite of 11 counterfactual evaluation tasks to measure an LM's flexibility to adapt to new task variants across multiple categories and domains, as summarized in Figure 1. In each, the original task under the default conditions and its counterfactual variants share the same reasoning procedure but differ in their input-output mappings. We consider traditional NLP tasks such as deductive reasoning, non-language tasks that are nonetheless commonly evaluated such as code generation, as well as non-standard tasks such as drawing and spatial reasoning. The latter extralinguistic tasks test whether LMs are able to learn conceptual structures that mirror the structure of the non-linguistic world, which has been suggested by recent work (Abdou et al., 2021; Ilharco et al., 2021; Patel and Pavlick, 2022; Li et al., 2023a; Bubeck et al., 2023; Søgaard, 2023; i.a.). | 2307.02477#7 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 7 | Figure 2: An overview of our framework, consisting of five modules: observation, belief, communication, reasoning, and planning, where the Communication Module and the Reasoning Module leverage Large Language Models to generate messages and decide on high-level plans. Here we also show the overall prompt design for leveraging LLMs to serve as these two modules. More design details can be found in Appendix A.
# 3.1 Problem Setup
Our problem can be defined as a decentralized partially observable Markov decision process (Dec-POMDP) augmented with communication, which can be formalized by $(S, G, \{A_i\}, \{O_i\})$, where $n$ embodied intelligent agents take actions $a_i \in A_i$ to navigate, interact, and communicate in a partially-observable environment given the current step's observation $o_i \in O_i$ (including the messages received) for each agent $i$, cooperating to solve a long-horizon task with a goal $g \in G$, normally consisting of several sub-goals $g_1, g_2, \cdots, g_m$. Real-life household activities are representative of this kind of task, requiring intelligent embodied agents to cooperate with other agents and humans through long-horizon planning and effective communication.
# 3.2 Our Proposed Framework | 2307.02485#7 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"}] |
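The problem setup above formalizes cooperation as a communication-augmented Dec-POMDP $(S, G, \{A_i\}, \{O_i\})$. The following Python sketch shows one way the goal, partial observation, and per-agent action interface could be represented; all class and field names are illustrative assumptions rather than the authors' implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Goal:
    """A long-horizon goal g in G, decomposed into sub-goals g_1, ..., g_m."""
    description: str
    subgoals: list[str]
    completed: set[int] = field(default_factory=set)

    def satisfied(self) -> bool:
        return len(self.completed) == len(self.subgoals)

@dataclass
class Observation:
    """A partial observation o_i in O_i, including received messages."""
    visible_objects: list[str]
    messages: list[str]

class Agent:
    """Agent i choosing an action a_i in A_i from its partial observation (stub policy)."""
    def act(self, obs: Observation, goal: Goal) -> str:
        if obs.visible_objects:
            return f"interact:{obs.visible_objects[0]}"
        return "explore"

agent = Agent()
obs = Observation(visible_objects=["plate"], messages=["Alice: I will set the table."])
print(agent.act(obs, Goal("set up dinner", ["plates", "forks"])))  # interact:plate
```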
2307.02486 | 7 | $O = \mathrm{softmax}(QK^{T})\,V \qquad (1)$
Self-attention struggles with long sequences, due to its quadratic dependency on the sequence length. One query would attend to all keys and values, leading to computational inefficiencies.
Sparse attention alleviates this issue by restricting the query's access to a subset of keys and values. The key of sparse attention is the sparse attention pattern $S \in \{0, 1\}^{N \times N}$, which determines the specific keys and values that the query $Q$ can attend to.
$O = \mathrm{softmax}(QK^{T} \odot \mathbb{1}_S)\,V \qquad (2)$
For example, the fixed pattern of sparse Transformer [CGRS19] is composed of a local pattern and a strided pattern. The sequence is divided into blocks of length $l$. The local pattern allows one query to attend to tokens within the same block, while the strided pattern allows one query to attend to the last $c$ tokens of each block. Formally, the local pattern is $S^{(1)}_i = \{j \mid \lfloor j/l \rfloor = \lfloor i/l \rfloor\}$, and the strided pattern is $S^{(2)}_i = \{j \mid j \bmod l \in \{t, t+1, \ldots, l\}\}$.
# 2.2 Dilated Attention | 2307.02486#7 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithmic dependency between any two tokens in a
sequence; 2) it can serve as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
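To illustrate the fixed sparse pattern of the sparse Transformer described in the chunk above, here is a minimal NumPy sketch that builds the combined local-plus-strided boolean mask. The 0-indexed convention differs cosmetically from the paper's formula, and this implementation is an assumption for illustration only.

```python
import numpy as np

def fixed_sparse_pattern(N: int, l: int, c: int) -> np.ndarray:
    """Boolean N x N mask: True where query i may attend to key j."""
    i = np.arange(N)[:, None]
    j = np.arange(N)[None, :]
    local = (j // l) == (i // l)      # S^(1): same block of length l
    strided = (j % l) >= (l - c)      # S^(2): last c tokens of every block
    return local | strided

print(fixed_sparse_pattern(N=8, l=4, c=1).astype(int))
```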
One successful approach for aligning both objectives is to prompt models using zero- or n-shot techniques, where the response would look like a completion of a document containing QA (Brown et al. 2020, Radford et al. 2018).
Another approach is to instruct and tune a vanilla model on tuples of instruction and response, so the model, as part of learning, acquires skills to imitate the correct response format (Alpaca: Taori et al. 2023, Self-Instruct: Wang et al. 2023).
In the InstructGPT paper (Ouyang et al. 2022), the criterion "fails to follow the correct instruction / task" was included in the list of human evaluation metadata for a reward model (RM) used in the PPO algorithm (Schulman et al. 2017) to fine-tune the SFT models to maximize their reward.
We aim to isolate and understand the tone component by evaluating each strategy as a style formatting problem rather than using knowledge- and language-understanding-based metrics, e.g., MMLU (Hendrycks et al. 2021).
# Instruction Following Index
# 3.1 Motivation | 2307.03692#7 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2101.00027"}] |
2307.02046 | 8 | [Figure, continued: example LLM responses for the four recommendation tasks, e.g., listing five candidate movies inferred from the watch history, predicting a rating of 9.0 for "John Wick: Chapter 4" by similarity to the rating history, and explaining that a recommended movie shares features with the user's recently watched movies.] | 2307.02046#8 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{"id": "2201.11903"}, {"id": "2305.05973"}, {"id": "2010.15980"}, {"id": "2307.09688"}, {"id": "2307.07171"}, {"id": "2305.15498"}, {"id": "2305.02182"}, {"id": "2305.12090"}, {"id": "2305.07609"}, {"id": "2304.03516"}, {"id": "2303.14524"}, {"id": "2305.15673"}, {"id": "2301.00234"}, {"id": "2305.13112"}, {"id": "2307.10747"}, {"id": "2302.02591"}, {"id": "2305.15062"}, {"id": "2307.15780"}, {"id": "2303.13835"}, {"id": "2307.05722"}, {"id": "2305.07001"}, {"id": "2303.17564"}, {"id": "2305.11700"}, {"id": "2304.03879"}, {"id": "2206.08082"}, {"id": "2305.05065"}, {"id": "2305.00447"}, {"id": "2302.05729"}, {"id": "2304.10149"}, {"id": "2304.01097"}, {"id": "2306.05817"}, {"id": "2304.03153"}, {"id": "2304.04218"}, {"id": "2301.11489"}, {"id": "2305.06569"}, {"id": "2206.06190"}, {"id": "2307.02157"}, {"id": "2305.19860"}, {"id": "2305.15756"}, {"id": "2305.07633"}, {"id": "2305.16582"}, {"id": "2305.08845"}, {"id": "2307.03393"}, {"id": "2304.11116"}, {"id": "2306.06031"}, {"id": "2303.18223"}, {"id": "2305.15036"}, {"id": "2305.17812"}, {"id": "2010.01494"}, {"id": "2205.09666"}, {"id": "2205.08084"}, {"id": "2106.09685"}, {"id": "2106.00573"}, {"id": "2305.11255"}, {"id": "1810.04805"}, {"id": "2204.02311"}, {"id": "2305.06566"}, {"id": "2306.17256"}, {"id": "2305.06212"}, {"id": "2306.02552"}, {"id": "2305.07961"}, {"id": "2203.11171"}, {"id": "2301.12867"}, {"id": "2305.04518"}, {"id": "2305.14552"}, {"id": "2112.08633"}, {"id": "2307.14225"}, {"id": "1511.06939"}, {"id": "2012.15723"}, {"id": "2303.08896"}, {"id": "2306.06615"}, {"id": "2305.15075"}, {"id": "2305.09858"}, {"id": "2209.10117"}, {"id": "2305.06474"}, {"id": "2201.08239"}, {"id": "2302.03735"}, {"id": "2109.01652"}, {"id": "2305.07622"}, {"id": "2306.10933"}] |
2307.02053 | 8 | Maintaining VICUNA's Chatting Ability. VICUNA has demonstrated remarkable chatting ability, achieving 90% of the performance of ChatGPT. This indicates its significant potential as an open-source alternative to closed-source large language models (LLMs) like ChatGPT. To ensure
Dataset Name | Source | Dataset Size
---|---|---
Flan2021 | FLAN | 388K
Public Pool of Prompts | FLAN | 320K
Natural instructions v2 | FLAN | 200K
CoT | FLAN | 100K
Code Search | Husain et al. [2019b] | 100K
Code Contest | Li et al. [2022b] | 50K
Apps | Hendrycks et al. [2021b] | 50K
GPT4-Alpaca | GPT-4 | 52K
Code-Alpaca | ChatGPT | 20K
ShareGPT | ChatGPT | 60K
Total | - | 1.34M
Table 1: The FLAN-MINI Collection, used to train FLACUNA. | 2307.02053#8 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2301.13688"}, {"id": "2106.09685"}, {"id": "2203.07814"}, {"id": "1909.09436"}] |
We evaluate the performance of GPT-4 (OpenAI, 2023), GPT-3.5, Claude (Anthropic, 2023), and PaLM-2 (Anil et al., 2023) on tasks under both the default and counterfactual conditions. We observe above-random counterfactual performance for most tasks, indicating some degree of task generalizability. However, their performance on counterfactual task variants consistently and substantially degrades relative to the performance on the default settings. This suggests that these models' ability on these tasks is supported at least in part by non-transferable, default-condition-specific behaviors rather than abstract, generalizable reasoning skills.
These results also reveal several surprising relations between model behavior on default and counterfactual tasks (§5), including correlations between default and counterfactual performance, varying effectiveness of zero-shot chain-of-thought prompting (Kojima et al., 2023), and interactions between task- and instance-level frequency effects. Overall, we find that small variations on the default instantiations of tasks are challenging for models, and thus the success of existing LMs should not be fully attributed to a fully general capacity for the target task.
# 2 Counterfactual Tasks | 2307.02477#8 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 8 | # 3.2 Our Proposed Framework
The overall modular framework is shown in Figure 2, which consists of five modules: observation, belief, communication, reasoning, and planning. At each step, we first process the raw observation received with an Observation Module (3.2.1), then update the agent's inner belief of the scene and the other agents through a Belief Module (3.2.2); this belief is then used with the previous actions and dialogues to construct the prompt for the Communication Module (3.2.3) and the Reasoning Module (3.2.4), which utilize Large Language Models to generate messages and decide on high-level plans. Finally, a Planning Module (3.2.5) gives the primitive action to take in this step according to the high-level plan.
# 3.2.1 Observation Module
To enable embodied cooperation, it is important to perceive raw observations from the environment and extract information for downstream higher-order reasoning.
To achieve this, we incorporate an Observation Module as the first module to deal with the observation received from the environment and extract useful high-level information such as visual scene graphs, objects, relationships between objects, maps of the environment, and other agents' locations. Our observation module can deal with both symbolic observations and egocentric visual observations.
# 3.2.2 Belief Module | 2307.02485#8 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"}] |
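Putting the five modules above together, a single agent step might look like the following Python sketch, where an LLM call backs the Communication and Reasoning Modules. The prompts, primitive action names, and data layout are illustrative assumptions, not the authors' code.

```python
def agent_step(raw_obs: dict, belief: dict, history: list, llm) -> tuple[str, str]:
    # Observation Module: extract high-level info from the raw observation.
    state = {"objects": raw_obs.get("objects", []),
             "others": raw_obs.get("others", [])}

    # Belief Module: update the inner belief of the scene and the other agents.
    belief.setdefault("seen_objects", set()).update(state["objects"])
    belief["others_state"] = state["others"]

    # Communication Module (LLM): decide what message to send.
    message = llm(f"Belief: {belief}\nDialogue history: {history}\nWhat should I say?")

    # Reasoning Module (LLM): decide on a high-level plan.
    plan = llm(f"Belief: {belief}\nMessage sent: {message}\nChoose a high-level plan.")

    # Planning Module: map the high-level plan to a primitive action (stub).
    action = "explore" if "explore" in plan.lower() else "grab"
    return action, message

# Toy run with a stub LLM standing in for GPT-4:
stub_llm = lambda prompt: "explore the kitchen"
print(agent_step({"objects": ["apple"], "others": ["Bob is in the kitchen"]},
                 {}, [], stub_llm))
```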
2307.02486 | 8 | # 2.2 Dilated Attention
Figure 2 illustrates the overview of dilated attention. Dilated attention splits the input $(Q, K, V)$ into $\frac{N}{w}$ segments $\{(\widetilde{Q}_i, \widetilde{K}_i, \widetilde{V}_i)\}$ equally with a segment length $w$. Each segment is then sparsified along the sequence dimension by selecting the rows with an interval $r$. The computation can be written as:
$\widetilde{Q}_i = [Q_{iw},\, Q_{iw+r},\, Q_{iw+2r},\, \ldots,\, Q_{(i+1)w-1}] \qquad (3)$
$\widetilde{K}_i = [K_{iw},\, K_{iw+r},\, K_{iw+2r},\, \ldots,\, K_{(i+1)w-1}] \qquad (4)$
$\widetilde{V}_i = [V_{iw},\, V_{iw+r},\, V_{iw+2r},\, \ldots,\, V_{(i+1)w-1}] \qquad (5)$
The sparsified segments $\{(\widetilde{Q}_i, \widetilde{K}_i, \widetilde{V}_i)\}$ are fed into the attention in parallel, after which they are scattered and concatenated as the output $O$:
$\widetilde{O}_i = \mathrm{softmax}(\widetilde{Q}_i \widetilde{K}_i^{T})\, \widetilde{V}_i \qquad (6)$
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithmic dependency between any two tokens in a
sequence; 2) it can serve as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
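A minimal NumPy sketch of dilated attention as in Eqs. (3)-(8) above: split $(Q, K, V)$ into segments of length $w$, subsample rows at interval $r$, attend within each sparsified segment, and scatter the outputs back. It covers a single head and omits softmax scaling, masking, and the mixing of multiple segment-length/dilation-rate configurations used in the full method; it is an illustrative assumption, not the authors' kernel.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dilated_attention(Q, K, V, w: int, r: int) -> np.ndarray:
    N, d = Q.shape
    O = np.zeros((N, d))
    for start in range(0, N, w):                  # one segment per iteration
        idx = np.arange(start, start + w, r)      # rows iw, iw+r, iw+2r, ... (Eqs. 3-5)
        Qi, Ki, Vi = Q[idx], K[idx], V[idx]
        Oi = softmax(Qi @ Ki.T) @ Vi              # attention in the segment (Eq. 6)
        O[idx] = Oi                               # scatter back; unselected rows stay 0 (Eqs. 7-8)
    return O

Q = K = V = np.random.randn(16, 4)
print(dilated_attention(Q, K, V, w=8, r=2).shape)  # (16, 4)
```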
2307.03692 | 8 | # Instruction Following Index
# 3.1 Motivation
An instruction-following model intuitively behaves like a conversational agent, i.e., it always assumes the input is an instruction and, depending on its understanding, tries to provide an answer or ask follow-up questions. In contrast, a model that does not follow instructions will try to predict the next tokens and optionally provide an answer or continue with the next instruction. The distinction between the two model classes becomes clearer for an instruction that is an incomplete sentence fragment: an instruction-following model will never try to complete the instruction.
It is crucial to emphasise that the quality of responses is purposely beyond the scope of this classification. The above criteria are thus necessary but not sufficient conditions for a chat model.
In this paper, we introduce the Instruction Following Score (IFS), defined as the ratio of "answer-like" responses to "continuation-like" responses to a predefined set of instructions. The class of a response is determined by a binary classifier (subsequently called the "response tone classifier"). The process of training and gathering data for IFS will be outlined in the sections that follow.
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2101.00027"}] |
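A minimal sketch of how IFS could be computed from model responses, following the chunk's literal definition above as a ratio of "answer-like" to "continuation-like" responses. The keyword heuristic stands in for the paper's trained binary response-tone classifier and is purely an illustrative assumption.

```python
def is_answer_like(response: str) -> bool:
    """Crude stand-in for the trained response tone classifier."""
    fragment_starts = ("and ", "or ", "which ", "but ", "to ")
    return not response.lstrip().lower().startswith(fragment_starts)

def instruction_following_score(responses: list[str]) -> float:
    """IFS as the ratio of answer-like to continuation-like responses."""
    answer_like = sum(is_answer_like(r) for r in responses)
    continuation_like = len(responses) - answer_like
    return answer_like / max(continuation_like, 1)

responses = [
    "Paris is the capital of France.",                        # answer-like
    "The sky appears blue because of Rayleigh scattering.",   # answer-like
    "and then explain the result in one sentence",            # continuation-like
]
print(instruction_following_score(responses))  # 2.0
```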
2307.02053 | 9 | Table 1: The FLAN-MINI Collection, used to train FLACUNA.
that FLACUNA retains VICUNA's learned knowledge and chatting ability, we incorporated various ChatGPT datasets, including Alpaca [Taori et al., 2023], Code Alpaca [Chaudhary, 2023], and ShareGPT [Chiang et al., 2023], into our FLAN collection. Among these three datasets, VICUNA was originally fine-tuned using the ShareGPT dataset. The final collection was then used to train FLACUNA.
Architecture. We employed LORA in the VICUNA model for fine-tuning on the FLAN-MINI collection. We inserted the low-rank adapters on all the query and value projection layers, resulting in a total trainable parameter count of 6.55M, which is only around 0.05% of the parameter count of the original 13B VICUNA model. The maximum input sequence length was set to 1280, and efficient training was facilitated by utilizing bf16 precision.
Hyperparameter Details. FLACUNA was trained on 4ÃA6000 GPUs for 1 epoch. We use 16 gradient accumulation steps with a per-device batch size of 2, resulting in a total batch size of 128. We used 3000 warm-up steps and a learning rate of 2e-5. | 2307.02053#9 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2301.13688"}, {"id": "2106.09685"}, {"id": "2203.07814"}, {"id": "1909.09436"}] |
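A minimal sketch of the adapter setup described above, using the Hugging Face peft library: low-rank adapters on all query and value projection layers of a LLaMA-family model. The rank, alpha, dropout, and model path are assumptions; the text specifies only the target layers, the resulting ~6.55M trainable parameters (~0.05% of the 13B model), and bf16 training.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("path/to/vicuna-13b")  # placeholder path

lora_config = LoraConfig(
    r=16,                                 # assumed rank (not given in the text)
    lora_alpha=32,                        # assumed scaling factor
    lora_dropout=0.05,                    # assumed dropout
    target_modules=["q_proj", "v_proj"],  # all query and value projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()        # a few million params, ~0.05% of 13B
```

A standard causal-LM fine-tuning loop with the hyperparameters quoted above (total batch size 128 via gradient accumulation, 3000 warm-up steps, learning rate 2e-5) can then be run over the adapted model.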
2307.02477 | 9 | # 2 Counterfactual Tasks
We informally conceptualize each task as a function $f_w : X \to Y$ that maps an input $x \in X$ under a world model $w \in W$ to an output $y \in Y$. World models encapsulate the conditions under which function evaluation takes place. For example, in Python programming, $w$ might specify assumptions of Python such as indexing and operator precedence; in arithmetic, $w$ could represent the set of conditions required for an arithmetic operation, such as the number base. We refer to the set of assumed default conditions, including but not limited to the base's being 10, as the default world, or $w_\text{default}$. Intuitively, for any task, $w_\text{default}$ corresponds to the set of conditions underlying the majority of task instances in text corpora.1
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 9 | # 3.2.2 Belief Module
Since LLMs have no intrinsic memory of previous observations or interactions, it's crucial to find a way to effectively store and update the belief about the physical scene and the states of the other agents. Here we propose a Belief Module to keep track of the four following pieces of information.
Task Progress $P_T$: We keep track of the task progress in the belief module as Task Progress $P_T$ and update it whenever possible using processed observation information.
Ego-State $P_E$: Knowing its own state is also of vital importance for an embodied agent, so we gather all the information about the agent's own state from the processed observation and store it in the belief module as Ego-State $P_E$.
Others-State $P_O$: Keeping track of the other agents' states is important for cooperation, so we maintain Others-State $P_O$ in the belief module and update it whenever a new observation of the others is possible. | 2307.02485#9 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
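The belief module above tracks four kinds of state: P_T, P_E, P_O, and the Scene Memory P_S described in the next chunk of this paper. Below is a minimal, hypothetical sketch of such a structure; the field names and the last-write-wins update rule are illustrative assumptions, not the authors' implementation.

```python
# A minimal sketch of the Belief Module (P_T, P_E, P_O, P_S).
# All names and the update logic are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class BeliefModule:
    task_progress: dict = field(default_factory=dict)  # P_T
    ego_state: dict = field(default_factory=dict)      # P_E
    others_state: dict = field(default_factory=dict)   # P_O
    scene_memory: dict = field(default_factory=dict)   # P_S: object -> last seen state

    def update(self, observation: dict) -> None:
        # Overwrite stale entries with the newest processed observation.
        self.task_progress.update(observation.get("progress", {}))
        self.ego_state.update(observation.get("self", {}))
        self.others_state.update(observation.get("others", {}))
        for obj, state in observation.get("objects", {}).items():
            # Scene memory may be stale: other agents can change object
            # states unobserved, so newer observations always win.
            self.scene_memory[obj] = state

beliefs = BeliefModule()
beliefs.update({"objects": {"apple": "on_table"}, "self": {"room": "kitchen"}})
print(beliefs.scene_memory)  # {'apple': 'on_table'}
```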
2307.02486 | 9 | $\tilde{O}_i = \mathrm{softmax}(\tilde{Q}_i \tilde{K}_i^{T}) \tilde{V}_i$ (6)
$\hat{O}_i = \{\tilde{O}_{i,j} \mid j \bmod r = 0;\ 0 \mid j \bmod r \neq 0\}$ (7)
$O = [\hat{O}_0, \hat{O}_1, ..., \hat{O}_{\frac{N}{w}-1}]$ (8)
In the implementation, the dilated attention can be transformed into dense attention between a gathering operation over the input (Q, K, V) and a scattering operation over the output $\tilde{O}_i$, so it can directly reuse any optimization for vanilla attention (e.g., flash attention [DFE+22]). Dilated attention can significantly reduce the computation cost, by a factor of $\frac{N}{w} r^2$ over vanilla attention (this factor follows from Eq. (16) below).
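Below is a minimal NumPy sketch of Eqs. (6)-(8): per-segment gather of every r-th row, dense attention on the sparsified rows, then scatter back with zeros at the non-selected positions. It is non-causal and single-head for brevity, and shapes and variable names are assumptions for illustration.

```python
# A minimal sketch of dilated attention, Eqs. (6)-(8): gather, attend, scatter.
import numpy as np

def dilated_attention(Q, K, V, w, r):
    N, d = Q.shape
    O = np.zeros_like(V)
    for i in range(N // w):                      # each segment of length w
        sl = slice(i * w, (i + 1) * w, r)        # gather every r-th row (inputs of Eq. 6)
        Qi, Ki, Vi = Q[sl], K[sl], V[sl]
        A = Qi @ Ki.T / np.sqrt(d)               # dense attention over w/r rows
        A = np.exp(A - A.max(axis=-1, keepdims=True))
        A /= A.sum(axis=-1, keepdims=True)       # row-wise softmax
        O[sl] = A @ Vi                           # scatter back: Eqs. (7)-(8)
    return O

rng = np.random.default_rng(0)
Q = rng.normal(size=(16, 8)); K = rng.normal(size=(16, 8)); V = rng.normal(size=(16, 8))
out = dilated_attention(Q, K, V, w=8, r=2)       # 2 segments, 4 attended rows each
print(out.shape)  # (16, 8)
```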
[Figure panels: dilated attention with segment lengths 4, 8, 16 and dilation rates 1, 2, 4; see the Figure 2 caption in the next chunk] | 2307.02486#9 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 9 | In this paper, we use "conversational tone" and "instruction following tone" interchangeably, meaning a class of "answer-like" responses. The process of fine-tuning a base model to obtain an instruct model is called "instruction tuning."
# 3.2 Dataset
The dataset for IFS is derived from a chat dataset, which originally consists of pairs (instruction, response). We will need to model inputs and outputs for models that aren't following instructions. The main idea for data generation is to append the instruction to the response and then consider different subdivisions into two phrases, as shown in Figure 1.
[Figure 1 content: the example "What is the capital of France? The capital of France is Paris." shown under different datapoint splits, e.g., (I, R), (I, Ic+R), and (Ip, Ic); panel labels: instruction, response]
Figure 1: IFS dataset generation. Different splits define fragments: I, R, Ip, Ic. | 2307.03692#9 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 10 | pre-trained language models (e.g., BERT) for recommender systems cannot sufficiently capture textual knowledge about users and items, demonstrating their inferior natural language understanding capability, which leads to suboptimal prediction performance in various recommendation scenarios. Second, most existing RecSys methods have been specifically designed for their own tasks and have inadequate generalization ability to unseen recommendation tasks. For example, a recommendation algorithm well-trained on a user-item rating matrix for predicting movies' rating scores may find it challenging to perform top-k movie recommendations along with certain explanations. This is due to the fact that the design of these recommendation architectures highly depends on task-specific data and domain knowledge toward specific recommendation scenarios such as top-k recommendations, rating predictions, and explainable recommendations. Third, most existing DNN-based recommendation methods can achieve promising performance on recommendation tasks needing simple decisions (e.g., rating prediction and top-k recommendations). However, they face difficulties in supporting complex, multi-step decisions that involve multiple reasoning steps. For instance, multi-step reasoning is crucial to trip planning recommendations, where RecSys should first consider popular tourist attractions based on the destination, then arrange a suitable itinerary corresponding to the tourist attractions, and finally recommend a journey plan according to specific user preferences (e.g., cost and time for travel). | 2307.02046#10 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 10 | # 3 Evaluation Tasks and Results
# 3.1 Problem Solving Evaluation
To assess the problem-solving prowess of instructed large language models (LLMs), INSTRUCTEVAL employs a range of benchmarks encompassing real-world exams that delve into diverse topics. These benchmarks encompass complex instructions, arithmetic problems, programming challenges, and causal reasoning tasks. In order to excel in these benchmarks, models need to exhibit a profound understanding of the world, demonstrate multi-hop reasoning capabilities, showcase creativity, and employ a plethora of other cognitive skills.
World Knowledge. The Massive Multitask Language Understanding (MMLU) benchmark, introduced in the work by Hendrycks et al. [2021c], serves as an assessment tool to gauge the problem-solving aptitude and world knowledge of language models across various subjects. It offers evaluations in both zero-shot and few-shot settings, presenting a more challenging and human-like evaluation scenario. The MMLU benchmark encompasses a comprehensive range of 57 subjects spanning STEM, humanities, social sciences, and other domains. The difficulty levels of the tasks within the benchmark vary from elementary to advanced professional levels, providing a comprehensive assessment of the model's capabilities in problem-solving and domain understanding.
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 10 | Traditional evaluations of machine learning models assess how closely a model's learned hypothesis h estimates f_w by independently sampling training and test sets from the population distribution D_{f_w}, and only exposing the model to the training set for learning h. However, in datasets of scraped web text, these evaluations are subject to potential data contamination issues (Brown et al., 2020; Dodge et al., 2021; Magar and Schwartz, 2022; i.a.). These issues may be more severe in recent LMs: the ever-growing pretraining datasets potentially expose the models to more evaluation instances, and the increasing sizes of recent LMs give them more ability to memorize these instances (Carlini et al., 2020; Magar and Schwartz, 2022).
We hence consider another dimension of generalization: generalization to new task variants in counterfactual worlds w_cf, instead of new inputs x. This allows us to measure the extent to which a model's f_{w_default} performance is specific to w_default or attributable to a general implementation of the task f.2 For arithmetic, a possible w_cf would be the same as w_default but assuming a base other than base-10. We expect a model with general arithmetic ability to perform similarly in other bases (see the worked example below).
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
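A toy illustration of the counterfactual world model discussed in the chunk above: the same two-digit addition under the default base-10 versus a counterfactual base-9. The helper name is hypothetical; the paper's actual prompts and parsing are not shown here.

```python
# Ground truth for the same addition task under w_default (base 10) vs. w_cf (base 9).
def add_in_base(x: str, y: str, base: int) -> str:
    total = int(x, base) + int(y, base)   # interpret the digit strings in the given base
    digits = []
    while total:
        digits.append(str(total % base))  # emit digits least-significant first
        total //= base
    return "".join(reversed(digits)) or "0"

print(add_in_base("75", "48", base=10))   # '123' under the default conditions
print(add_in_base("75", "48", base=9))    # '134' under the counterfactual base
```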
2307.02485 | 10 | Scene Memory P_S The memory of what objects have been seen and where they were is vital for an embodied agent exploring a vast space. Without this, it would be impossible for the agents to make long-horizon plans and share them with each other. We keep a record of seen objects and their states as Scene Memory P_S. Note that this memory of the scene may not be accurate, since other agents may interact with the objects and change their states without the agent's awareness. Conflicts between the agent's memory of the scene and the descriptions of the scene from others therefore need to be resolved.
# 3.2.3 Communication Module
It's important for cooperative embodied agents to be able to communicate effectively with others. Effective communication requires solving two problems: what to send and when to send.
We deal with the what-to-send problem in this module by directly using the LLMs as a Message Generator with designed prompts, shown in Figure 2, constructed from the components of Instruction Head, Goal Description, State Description, Action History, and Dialogue History. To better constrain the LLMs' generated messages, we also add a note at the end of the prompt and append two seed messages at the beginning of the Dialogue History to elicit the desired effective communication behavior. The detailed prompt design is shown in Appendix A (see the prompt-assembly sketch below).
# 3.2.4 Reasoning Module | 2307.02485#10 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
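A minimal sketch of the Message Generator prompt described in the chunk above, assembled from the listed components with a constraining note and two seed messages. All literal strings here are illustrative assumptions, not the authors' exact prompt from Appendix A.

```python
# Assemble a communication prompt from the components named in the paper.
def build_message_prompt(goal, state, action_history, dialogue_history):
    seed_messages = [                                  # assumed seed dialogue
        'Alice: "Hi, I will share the location of any goal object I find."',
        'Bob: "Thanks! I will do the same."',
    ]
    return "\n".join([
        "Instruction Head: You are a cooperative embodied agent...",
        f"Goal Description: {goal}",
        f"State Description: {state}",
        f"Action History: {'; '.join(action_history)}",
        "Dialogue History:",
        *seed_messages,
        *dialogue_history,
        "Note: keep the message short and only share new, useful information.",
    ])

prompt = build_message_prompt(
    goal="Put 3 apples into the fridge",
    state="I am in the kitchen holding one apple",
    action_history=["goexplore <kitchen>", "gograb <apple>"],
    dialogue_history=[],
)
print(prompt)
```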
2307.02486 | 10 | [Figure 2 panels: segment lengths 4, 8, 16 with dilation rates 1, 2, 4]
Figure 2: Building blocks of dilated attention used in LONGNET. It consists of a series of attention patterns for modeling both short-range and long-range dependency. The number of attention patterns can be extended according to the sequence length.
In practice, the segment size w trades the globality of attention for efficiency, while the dilation with a size r reduces the computation cost by approximating the attention matrix. To capture both long-range and short-range information efficiently, we implement a mixture of dilated attentions with k different segment sizes and dilation rates $\{r_i, w_i\}_{i=1}^{k}$:
$O = \sum_{i=1}^{k} \alpha_i O|_{r_i, w_i}$ (9)
$\alpha_i = \frac{s_i}{\sum_j s_j}$ (10)
where $s_i$ denotes the denominator of the attention softmax for $O|_{r_i, w_i}$. Note that the computations for $\{O|_{r_i, w_i}\}_{i=1}^{k}$ are in parallel because there is no computation dependency among them. Experiments show that dynamic weights calculated by the denominator of the attention softmax are better than learnable fixed weights. When a query attends to keys in different dilated attentions, our method of mixing dilated attentions is equivalent to gathering the keys from the different parts and calculating the softmax together (see the numerical sketch below). | 2307.02486#10 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
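A small numerical sketch of Eqs. (9)-(10) from the chunk above: each dilated attention pattern contributes its output weighted by the (dynamic) denominator of its own attention softmax. Toy vectors stand in for the per-pattern outputs $O|_{r_i, w_i}$.

```python
# Mix per-pattern outputs with softmax-denominator weights (Eqs. 9-10).
import numpy as np

def mix_dilated_outputs(outputs, softmax_denominators):
    s = np.asarray(softmax_denominators, dtype=float)
    alphas = s / s.sum()                                # Eq. (10)
    return sum(a * o for a, o in zip(alphas, outputs))  # Eq. (9)

# Three attention patterns; larger denominators dominate the mixture.
O_patterns = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
s_values = [4.0, 1.0, 1.0]
print(mix_dilated_outputs(O_patterns, s_values))        # approx. [0.75 0.25]
```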
2307.03692 | 10 | [Figure 1 labels: instruction, response]
Figure 1: IFS dataset generation. Different splits define fragments: I, R, Ip, Ic.
If the cut regenerates (instruction, response), we get the ideal input and output for a chat model. If we shift the split to the right or to the left, we can obtain incomplete (fragmented) sentences, which represent unfinished instructions or continuations of instructions followed by responses. To summarize, we can get:
⢠Inference inputs:
I - Instruction
Ip - Partial (fragmented) instruc- tion
⢠Inference outputs:
Ic - Continuation of the instruction
R - Response
In fact, combinations of those 4 parts give all possible pairs of inputs and outputs for vanilla and chat models. In the table below we recombine the parts and assign each pair a binary score depending on whether the model responds like a chat model (see the fragment-generation sketch below).
(I, R) The response R for instruction I is conversational. A model all of whose responses resembled this form would be instruction-following, so the response has label 1.
(Ip, R) The response R for partial instruction Ip is also conversational, but in this case the model does not have enough context to provide any answer except to request more information. This response is also labeled as 1.
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
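A minimal sketch of the fragment generation described in the chunks above: append the response to the instruction and slide a split point to obtain the four fragment types I, R, Ip, Ic. Word-level splitting is an assumption; the paper's exact splitting granularity is not specified here.

```python
# Produce the four IFS fragments for one split point inside the instruction.
def fragments(instruction: str, response: str, cut: int):
    words = instruction.split()
    return {
        "I": instruction,
        "R": response,
        "Ip": " ".join(words[:cut]),   # partial (fragmented) instruction
        "Ic": " ".join(words[cut:]),   # continuation of the instruction
    }

f = fragments("What is the capital of France?", "The capital of France is Paris.", cut=4)
print(f["Ip"])  # 'What is the capital'
print(f["Ic"])  # 'of France?'
# The six (input, output) pairs in the taxonomy recombine these fragments:
# (I, R) and (Ip, R) get label 1; (Ip, Ic), (I, Ic), (Ip, Ic+R), (I, Ic+R) get label 0.
```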
2307.02046 | 11 | parameters have generated large impacts on various research fields such as Natural Language Processing (NLP) [15], Computer Vision [16], and Molecule Discovery [17]. Technically, most existing LLMs are transformer-based models pre-trained on a vast amount of textual data from diverse sources, such as articles, books, websites, and other publicly available written materials. As the parameter size of LLMs continues to scale up with a larger training corpus, recent studies have indicated that LLMs can lead to the emergence of remarkable capabilities [18], [19]. More specifically, LLMs have demonstrated unprecedentedly powerful abilities in their fundamental responsibilities of language understanding and generation. These improvements enable LLMs to better comprehend human intentions and generate language responses that are more human-like in nature. Moreover, recent studies have indicated that LLMs exhibit impressive generalization and reasoning capabilities, enabling LLMs to generalize better to a variety of unseen tasks and domains. To be specific, instead of requiring extensive fine-tuning on each specific task, LLMs can apply their learned knowledge and reasoning skills to fit new tasks simply by providing appropriate instructions or a few task demonstrations. Advanced techniques such as | 2307.02046#11 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 11 | Complex Instructions. The subset known as BIG-Bench Hard (BBH) comprises 23 highly demanding tasks carefully selected from the BIG-Bench benchmark [Srivastava et al., 2022] to specifically target tasks that are considered to surpass the current capabilities of language models [Suzgun et al., 2022]. BBH presents models with intricate instructions that require advanced skills in navigation, logical deduction, and fallacy detection.
Comprehension and Arithmetic. Discrete Reasoning Over Paragraphs (DROP) is a reading comprehension task with a mathematical focus. It challenges systems to engage in discrete reasoning by analyzing passages extracted from Wikipedia articles. In order to excel in the DROP task, a system needs to adeptly navigate references within a question and identify the appropriate sections of the provided passage. Additionally, the system must demonstrate proficiency in performing discrete operations like addition, counting, or sorting. | 2307.02053#11 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 11 | 1This data-generating process can be described by the following generative model, P(y | x, w) P(x | w) P(w). From the perspective of causal inference, our counterfactual framework can be informally seen as performing a do-operator on this graph (Pearl, 2009).
2This setup is reminiscent of intensional models of natural language semantics (Heim and Kratzer, 1998, §12; Von Fintel and Heim, 2011), where f is analogous to the denotation function ⟦·⟧, x to its input, and y to its output. By default, the denotation is evaluated under the real world, extensionally, but when a different possible world is specified instead, we expect a competent system to adjust the evaluation accordingly.
We emphasize that our goal is not to find counterfactual world models that are completely outside the realm of human experience. Base-9 addition, for example, is not a novel concept. Nor do we aim to guarantee that counterfactual world models are unobserved in a pretraining corpus. Instead, counterfactuals are simply defined as variations on the default conditions for a task.
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 11 | # 3.2.4 Reasoning Module
With all the information gathered and provided by previous modules, cooperative embodied agents need to synthesize and reason over the current state, the beliefs about the others and the scene, the goals, the actions taken so far, and the messages received, in order to come up with a plan of what to do next. A strong reasoning module is required to leverage all the information effectively.
Since designing such a module from scratch is nearly infeasible, we utilize powerful LLMs directly as the Reasoning Module, with designed prompts similar to those of the Communication Module, to reason over all the information and generate a high-level plan. Specifically, we modify the Instruction Head and compile an Action List of all available actions for the LLMs to choose from; this formalization makes it easier for the LLMs to produce an executable plan without any few-shot demonstrations.
We also use the zero-shot chain-of-thought prompting technique introduced by [22] to encourage the LLM to carry out more reasoning before giving the final answer (see the prompt sketch below).
# 3.2.5 Planning Module | 2307.02485#11 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
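A minimal sketch of the Reasoning Module prompt described in the chunk above: the same components as the communication prompt, plus a compiled Action List to choose from and the zero-shot chain-of-thought trigger from [22]. The literal strings are illustrative assumptions, not the authors' exact prompt.

```python
# Assemble a reasoning prompt with a formalized Action List and zero-shot CoT.
def build_reasoning_prompt(goal, state, action_history, dialogue_history, action_list):
    numbered = [f"{chr(65 + i)}. {a}" for i, a in enumerate(action_list)]  # A., B., C.
    return "\n".join([
        "Instruction Head: Choose the best next action for the cooperative agent.",
        f"Goal Description: {goal}",
        f"State Description: {state}",
        f"Action History: {'; '.join(action_history)}",
        f"Dialogue History: {'; '.join(dialogue_history) or '(none)'}",
        "Available actions:",
        *numbered,
        "Let's think step by step.",  # zero-shot chain-of-thought trigger
    ])

print(build_reasoning_prompt(
    goal="Put 3 apples into the fridge",
    state="Holding one apple; the fridge is in the kitchen",
    action_history=["gograb <apple>"],
    dialogue_history=['Bob: "I found two apples in the living room."'],
    action_list=["goexplore <living room>", "goput <fridge>", "send a message"],
))
```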
2307.02486 | 11 | Intuitively, the local attention should be precisely computed, while the global attention can be approximate. Therefore, we set a larger $w_i$ with a bigger $r_i$. Moreover, we gradually increase the $w_i$ for each attention until it reaches the maximum length N or the number of attention patterns k:
$w = \{w_0, w_1, w_2, ..., N\}^{k} \quad (w_i < w_{i+1} < N)$ (11)
[Figure 3 panels: 1st to 4th heads; segment length 8, dilation rate 2, 4 heads]
Figure 3: Dilated attention with multiple heads. The attention patterns differ among heads by shifting the position successively.
$r = \{1, r_1, r_2, ..., r_k\}^{k} \quad (1 < r_i < r_{i+1})$ (12)
In practice, we set w and r to geometric sequences for an exponential attentive field (see the sketch below).
# 2.3 Multi-Head Dilated Attention
As shown in Figure 3, we vary the computation among different heads by sparsifying different parts of the query-key-value pairs. Specifically, for the j-th head, we have an offset $s_j = j \bmod r$ when selecting the (Q, K, V): | 2307.02486#11 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
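A small sketch of the geometric sequences in Eqs. (11)-(12) from the chunk above: segment sizes w and dilation rates r grow geometrically, expanding the attentive field exponentially until the segment size reaches the sequence length N. The growth factor of 2 and the starting values are assumptions for illustration.

```python
# Generate geometric (w, r) sequences satisfying w_i < w_{i+1} <= N and growing r_i.
def geometric_patterns(N, w0=2048, r0=1, factor=2):
    w, r = [], []
    wi, ri = w0, r0
    while wi < N:
        w.append(wi); r.append(ri)
        wi *= factor; ri *= factor
    w.append(N); r.append(ri)        # final pattern spans the whole sequence
    return w, r

print(geometric_patterns(32768))
# ([2048, 4096, 8192, 16384, 32768], [1, 2, 4, 8, 16])
```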
2307.03692 | 11 | (Ip, Ic) The model completes the fragmented instruction (executing the next-word prediction task). The pair does not look like a conversation, so the label is 0.
(I, Ic) The model generates further instructions (again performing next-word prediction), which gives the response label 0.
(Ip, Ic+R) In this case, the model completes the instruction and then replies (also performing next-word prediction). Although one might imagine people attempting to have such a dialogue, we treat instruction completion as a sign of a failed conversation. The label is 0.
(I, Ic+R) The model generates another instruction and then replies to its own generation. The dialogue fails, giving the response label 0.
Examples for each case are shown in Table 1.
| 2307.03692#11 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 12 | LLMs can apply their learned knowledge and reasoning skills to fit new tasks simply by providing appropriate instructions or a few task demonstrations. Advanced techniques such as in-context learning can further enhance such generalization performance of LLMs without being fine-tuned on specific downstream tasks [19]. In addition, empowered by prompting strategies such as chain-of-thought, LLMs can generate outputs with step-by-step reasoning in complicated decision-making processes. Hence, given their powerful abilities, LLMs demonstrate great potential to revolutionize recommender systems. | 2307.02046#12 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 12 | Programming. HumanEval serves as a problem-solving benchmark specifically designed for assessing the performance of large language models that are trained on code [Chen et al., 2021]. The benchmark comprises 164 unique programming problems, encompassing areas such as language comprehension, algorithms, and basic mathematics. Some of the problems included in HumanEval are similar in nature to straightforward software interview questions. In the evaluation process, models are assessed based on the functional correctness of the code programs they generate, with the criteria for correctness determined by the given docstrings. HumanEval provides a comprehensive evaluation framework for assessing the problem-solving capabilities of language models in a code-centric context.
Causality. The Counterfactual Reasoning Assessment (CRASS) benchmark is a novel dataset and evaluation tool developed specifically to assess the causal reasoning abilities of large language models. By employing counterfactual scenarios, CRASS tests the model's capability to identify and select appropriate causal explanations. This benchmark provides a unique and rigorous evaluation framework to gauge the causal reasoning capabilities of language models.
# 3.2 Alignment to Human Values | 2307.02053#12 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 12 | Concretely, we assess an LM's task performance with 0-shot prompting. We specify the task f, the test instance x, and the world model w in a prompt, parse the LM's output, and compare it to the ground-truth label. We denote the LM's implementation of f_w for a given instance x to be,
$h(f, w, x) = \arg\max_{y'} P_{\mathrm{LM}}(y' \mid \mathrm{prompt}_f(f, x), \mathrm{prompt}_w(w)),$
where the arg max is computed with an approximate decoding procedure and prompt_f and prompt_w are prompt templates that describe tasks and world models respectively. For each task, we devise one or more w_cf that deviate from the default world (i.e., the default task conditions). We evaluate both h(f, w_default, x) and h(f, w_cf, x) via task-specific metrics. If we control f_w(x) to be similarly hard for both w_default and w_cf, we can attribute the performance difference to an LM overfitting to the default instantiation of the task (see the evaluation sketch below).
# 2.1 Counterfactual Comprehension Check | 2307.02477#12 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
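A minimal sketch of the 0-shot evaluation h(f, w, x) described in the chunk above: a world-model prompt and a task prompt are concatenated, the LM's greedy output is parsed, and accuracy is compared across w_default and w_cf. The function `query_lm` is a hypothetical stand-in for an actual LM call, and the toy LM here only "knows" base-10.

```python
# Evaluate h(f, w, x) under default and counterfactual world models.
def evaluate(query_lm, prompt_w: str, prompt_f: str, gold: str) -> bool:
    output = query_lm(prompt_w + "\n" + prompt_f)  # approximate arg max via greedy decoding
    return output.strip() == gold

def fake_lm(prompt: str) -> str:                   # toy LM overfit to base-10 arithmetic
    return "123" if "75+48" in prompt else ""

default_ok = evaluate(fake_lm, "You are in a base-10 system.", "What is 75+48?", gold="123")
cf_ok = evaluate(fake_lm, "You are in a base-9 system.", "What is 75+48?", gold="134")
print(default_ok, cf_ok)  # True False -> the gap is attributable to w_cf
```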
2307.02485 | 12 | # 3.2.5 Planning Module
As shown in [9], solving challenging embodied tasks requires modular methods to tackle the complexity of the tasks. As also discussed in [49], we found that while Large Language Models were effective at making high-level plans, they were poor at making low-level controls. Thus, to enable effective embodied communication, we designed a Planning Module that can generate robust low-level controls
according to a given high-level plan, allowing the reasoning module to focus more on solving the overall task with the LLMs' rich world knowledge and strong reasoning ability. Practically, this approach also reduces the number of API requests needed, saving both time and cost.
We implement the Planning Module with a heuristic-designed low-level planner to robustly carry out primitive actions according to the high-level plan generated from the Reasoning Module (see the dispatch sketch below).
# 4 Experiments
We first introduce the two embodied environments we evaluate our framework on in Section 4.1, then discuss the performance of our designed framework when cooperating with AI agents in Section 4.2.1, showing that they are better cooperators, and that they can earn more trust and cooperate better with humans in Section 4.2.2. In Section 4.3, we analyze the effectiveness of our different modules.
# 4.1 Experimental Setup
# 4.1.1 Communicative Watch-And-Help | 2307.02485#12 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"} ]
2307.02486 | 12 | $\tilde{Q}_i = [Q_{iw+s_j}, Q_{iw+s_j+r}, Q_{iw+s_j+2r}, \dots, Q_{(i+1)w+s_j-1}]$ (13)
$\tilde{K}_i = [K_{iw+s_j}, K_{iw+s_j+r}, K_{iw+s_j+2r}, \dots, K_{(i+1)w+s_j-1}]$ (14)
$\tilde{V}_i = [V_{iw+s_j}, V_{iw+s_j+r}, V_{iw+s_j+2r}, \dots, V_{(i+1)w+s_j-1}]$ (15)
Following the vanilla multi-head attention, the outputs of different heads are concatenated into a final output. The rest of the computation remains the same as the single-head counterpart in Section 2.2.
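A small NumPy sketch of the row selection in Eqs. (13)-(15) may help; this is an illustration rather than the paper's implementation, and it assumes the per-head offset is $s_j = j \bmod r$.

```python
import numpy as np

# Sketch of the sparsification in Eqs. (13)-(15): within every segment of
# length w, keep one row out of every r, shifted by a per-head offset s_j
# (assumed here to be j mod r). Illustration only, not the paper's code.

def dilated_indices(N: int, w: int, r: int, j: int) -> np.ndarray:
    s_j = j % r
    rows = []
    for i in range(N // w):                       # segment index i
        rows.extend(range(i * w + s_j, (i + 1) * w, r))
    return np.asarray(rows)

Q = np.random.randn(16, 4)                        # toy sequence: N=16, d=4
Q_tilde = Q[dilated_indices(16, w=8, r=2, j=1)]   # keeps rows 1, 3, ..., 15
```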
# 2.4 Computational Complexity and Token Dependency
Given dilated attention with a segment size and dilation rate of $(r, w)$, each query-key-value pair is sparsified from $(Q, K, V) \in \mathbb{R}^{N \times d}$ to $(\tilde{Q}_i, \tilde{K}_i, \tilde{V}_i) \in \mathbb{R}^{\frac{w}{r} \times d}$ within each of the $\frac{N}{w}$ segments, so the flops of the attention computation are estimated as:
$FLOPs = \frac{2N}{w} \left(\frac{w}{r}\right)^2 d = \frac{2Nwd}{r^2}$ (16) | 2307.02486#12 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 12 | Case / Example / tone score:
(I, R): I: What if people had 40 legs? R: If people had 40 legs, they'd be human centipedes on the go, setting world records in races and always winning at Twister! Tone score: 1
(Ip, R): Ip: What if R: It seems like your question is incomplete. Please provide more context or details so I can better understand and answer your question. Tone score: 1
(Ip, Ic): Ip: What if Ic: people had 40 legs? Tone score: 0
(I, Ic): I: What if people had 40 legs? Ic: What if people had 3 eyes? Tone score: 0
(Ip, Ic + R): Ip: What if Ic + R: people had 40 legs? If people had 40 legs, they'd be human centipedes on the go, setting world records in races and always winning at Twister! Tone score: 0
(I, Ic + R): I: What if people had 40 legs? Ic + R: What if people had 3 eyes? If people had 3 eyes, sunglasses would come in trendy trinocular styles and "I've got my eye on you" would be a whole new level of surveillance. Tone score: 0
the set | 2307.03692#12 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2101.00027"} ]
2307.02046 | 13 | language processing techniques, Large Language Models (LLMs) with billion
Very recently, initial efforts have been made to explore
the potential of LLMs as a promising technique for the next-generation RecSys. For example, Chat-Rec [3] is proposed to enhance the recommendation accuracy and explainability by leveraging ChatGPT to interact with users through conversations and then refine the candidate sets generated by traditional RecSys for movie recommendations. Zhang et al. [20] employ T5 as an LLM-based RecSys, which enables users to deliver their explicit preferences and intents in natural language as RecSys inputs, demonstrating better recommendation performance than methods based merely on user-item interactions. Figure 1 demonstrates some examples of applying LLMs for various movie recommendation tasks, including top-K recommendation, rating prediction, conversational recommendation, and explanation generation. Due to their rapid evolution, it is imperative to comprehensively review recent advances and challenges of LLMs-empowered recommender systems. | 2307.02046#13 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{"id": "2201.11903"}, {"id": "2305.05973"}, {"id": "2010.15980"}, {"id": "2307.09688"}, {"id": "2307.07171"}, {"id": "2305.15498"}, {"id": "2305.02182"}, {"id": "2305.12090"}, {"id": "2305.07609"}, {"id": "2304.03516"},
{"id": "2303.14524"}, {"id": "2305.15673"}, {"id": "2301.00234"}, {"id": "2305.13112"}, {"id": "2307.10747"}, {"id": "2302.02591"}, {"id": "2305.15062"}, {"id": "2307.15780"}, {"id": "2303.13835"}, {"id": "2307.05722"},
{"id": "2305.07001"}, {"id": "2303.17564"}, {"id": "2305.11700"}, {"id": "2304.03879"}, {"id": "2206.08082"}, {"id": "2305.05065"}, {"id": "2305.00447"}, {"id": "2302.05729"}, {"id": "2304.10149"}, {"id": "2304.01097"},
{"id": "2306.05817"}, {"id": "2304.03153"}, {"id": "2304.04218"}, {"id": "2301.11489"}, {"id": "2305.06569"}, {"id": "2206.06190"}, {"id": "2307.02157"}, {"id": "2305.19860"}, {"id": "2305.15756"}, {"id": "2305.07633"},
{"id": "2305.16582"}, {"id": "2305.08845"}, {"id": "2307.03393"}, {"id": "2304.11116"}, {"id": "2306.06031"}, {"id": "2303.18223"}, {"id": "2305.15036"}, {"id": "2305.17812"}, {"id": "2010.01494"}, {"id": "2205.09666"},
{"id": "2205.08084"}, {"id": "2106.09685"}, {"id": "2106.00573"}, {"id": "2305.11255"}, {"id": "1810.04805"}, {"id": "2204.02311"}, {"id": "2305.06566"}, {"id": "2306.17256"}, {"id": "2305.06212"}, {"id": "2306.02552"},
{"id": "2305.07961"}, {"id": "2203.11171"}, {"id": "2301.12867"}, {"id": "2305.04518"}, {"id": "2305.14552"}, {"id": "2112.08633"}, {"id": "2307.14225"}, {"id": "1511.06939"}, {"id": "2012.15723"}, {"id": "2303.08896"},
{"id": "2306.06615"}, {"id": "2305.15075"}, {"id": "2305.09858"}, {"id": "2209.10117"}, {"id": "2305.06474"}, {"id": "2201.08239"}, {"id": "2302.03735"}, {"id": "2109.01652"}, {"id": "2305.07622"}, {"id": "2306.10933"} ]
2307.02053 | 13 | # 3.2 Alignment to Human Values
Noting the importance of aligning LLMs to human values, INSTRUCTEVAL incorporates the Helpful, Honest, and Harmless (HHH) benchmark [Askell et al., 2021]. The benchmark showcases engaging dialogues between humans and conversational assistants, challenging the model to discern and provide the most appropriate response. It encompasses a diverse array of 61 honesty-related, 59 helpfulness-related, and 58 harmlessness-related samples, along with 43 unique instances falling within the "other" category. The inclusion of the "other" category accounts for examples that embody values not explicitly covered by honesty, helpfulness, or harmlessness.
# 3.3 Writing Experiments
For the writing experiment, we utilized the IMPACT dataset, which is readily available in INSTRUCTEVAL. This comprehensive dataset consists of 50 prompts across distinct categories, namely informative, professional, argumentative, and creative. Following that, ChatGPT was assigned the responsibility of scoring the models' responses in terms of relevance (Rel.) and coherence (Coh.) on a scale of 1 to 5. For more comprehensive information regarding this evaluation, we refer readers to Chia et al. [2023].
# 3.4 Results
Comparative Baselines. As baselines, we selected VICUNA [Zheng et al., 2023] and STABLEVICUNA¹. | 2307.02053#13 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2301.13688"}, {"id": "2106.09685"}, {"id": "2203.07814"}, {"id": "1909.09436"} ]
2307.02477 | 13 | # 2.1 Counterfactual Comprehension Check
One potential confounder is that an LM may be failing at a particular counterfactual task by failing to understand the prompt component that specifies the counterfactual conditions, i.e., promptw(wcf). That is, an LM might still be reasoning in wdefault and completely ignore the instructions. While this would still be a failure of the LM, it does not necessarily represent a failure to perform the counterfactual task variant. We control for this by designing task-specific counterfactual comprehension checks (CCCs) that test an LM's surface understanding of the specified counterfactual world.
For each (default, counterfactual) task pair, we introduce another control task gw with input x′ and output y′ that is much simpler than fw but still allows for the discrimination of wdefault from wcf (i.e., gwcf(x′) ≠ gwdefault(x′)). A high performance of PLM(y′ | promptg(g, x′), promptw(wcf)) would indicate that promptw is effective at making the LM perform a task in wcf. In the arithmetic example, for a base-9 counterfactual world, we use | 2307.02477#13 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 13 | # 4.1 Experimental Setup
# 4.1.1 Communicative Watch-And-Help
Communicative Watch-And-Help (C-WAH) is an embodied multi-agent cooperation benchmark, extended from the existing Watch-And-Help Challenge [35], where we focus more on cooperation ability. To achieve this, we support communication between agents and remove the Watch stage so both agents have common goals. The challenge is built on a realistic multi-agent simulation platform, VirtualHome-Social [34, 35]. We conduct experiments under both symbolic observations and ego-centric visual observations. The task is defined as five types of common household activities: Prepare afternoon tea, Wash dishes, Prepare a meal, Put groceries, and Set up a dinner table, and represented as various predicates with counts to be satisfied. The number of total goal objects is between 3 and 5.
Setup We sampled 2 tasks from each of the five types of activities to construct a test set of 10 episodes. An episode is terminated if all the predicates in the goal are satisfied or the maximum number of steps (250) is reached. | 2307.02485#13 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"} ]
2307.02486 | 13 | $FLOPs = \frac{2N}{w} \left(\frac{w}{r}\right)^2 d = \frac{2Nwd}{r^2}$ (16)
We further extend it to dilated attention with multiple segment sizes and dilation rates. The flops can be written as:
$FLOPs = 2Nd \sum_{i=1}^{k} \frac{w_i}{r_i^2}$ (17)
With the segment sizes and dilation rates in Equation (11) and Equation (12), the flops are given by
$FLOPs = 2 w_0 N d \sum_{i=0}^{k-1} \frac{1}{\alpha^i} \leq \frac{2\alpha}{\alpha - 1} w_0 N d \quad (\alpha > 1)$ (18)
where $w_0$ is a predefined constant and $\alpha$ is the common ratio of the geometric sequences $w$ and $r$. Therefore, the computation complexity of dilated attention is approximately $O(Nd)$.
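As a quick sanity check of Eq. (18), the geometric choice of segment sizes and dilation rates keeps the total cost below the stated bound for any number of attention patterns k; the constants in the sketch below are arbitrary.

```python
# Numerical check of Eq. (18) under w_i = w0 * a**i and r_i = a**i.
# Constants are arbitrary; this only verifies the geometric-series bound.

def dilated_flops(N, d, w0, a, k):
    # Eq. (17) specialized: 2*N*d * sum_i w_i / r_i**2
    return 2 * N * d * sum((w0 * a**i) / (a**i) ** 2 for i in range(k))

N, d, w0, a = 1_000_000, 512, 2048, 2
bound = 2 * a / (a - 1) * w0 * N * d
assert all(dilated_flops(N, d, w0, a, k) <= bound for k in (1, 4, 16))
```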
Figure 4: Distributed training of LONGNET on two GPU devices. It parallelizes the training by partitioning the sequence dimension. The computation and communication costs are nearly constant as the number of devices grows.
Moreover, the information of each token can be propagated to a maximum distance of D: | 2307.02486#13 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 13 | would come in trendy trinocular styles and "I've got my eye on you" would be a whole new level of surveillance. 0 the set of responses is used to generate data for the binary classifier. Figure 2 shows how chat data is split and used in our experiment. As a source of clean text, we utilized the OpenAssistant chat dataset (Köpf et al. 2023). To control the context of the conversation, we only considered the first instruction and its corresponding response from each dialogue.
# 3.2.1 Instructions dataset
In the instruction dataset, data points consist of instructions sourced from OpenAssistant data, either unmodified (I) or fragmented (Ip). We obtained a total of 7340 examples, with an approximate 50% split between fragments and complete sentences. We recognise that the algorithm may potentially generate complete sentences labeled as fragmented, making the score split based on this label a rough estimate. Table 2 shows examples of full and partial instructions.
Table 2 fragment (Instruction, Label): What is the difference between HTML / What is the difference between HTML and JavaScript? / Who wears | 2307.03692#13 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2101.00027"} ]
2307.02046 | 14 | Therefore, in this survey, we provide a comprehensive overview of LLMs for recommender systems from the perspective of three paradigms: pre-training, fine-tuning, and prompting. The remaining part of this survey is organized as follows. First, we review the related works on RecSys and LLMs, and their combinations, in Section 2. Then, two types of LLM-empowered RecSys that take advantage of LLMs to learn the representations of users and items are illustrated in Section 3, namely ID-based RecSys and textual side information-enhanced RecSys. Subsequently, we summarize the techniques for adapting LLMs to RecSys in terms of the pre-training & fine-tuning paradigm and the prompting paradigm in Sections 4 and 5, respectively. Finally, some challenges and potential future directions for LLM-empowered RecSys are discussed in Section 6.
Concurrently with our survey, Liu et al. [21] review the training strategies and learning objectives of language modeling paradigm adaptations for recommender systems. Wu et al. [22] summarize LLMs for recommender systems from discriminative and generative perspectives. Lin et al. [23] introduce two orthogonal perspectives: where and how to adapt LLMs in recommender systems.
# 2 RELATED WORK
In this section, we briefly review some related work on recommender systems and LLM techniques. | 2307.02046#14 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{"id": "2201.11903"}, {"id": "2305.05973"}, {"id": "2010.15980"}, {"id": "2307.09688"}, {"id": "2307.07171"}, {"id": "2305.15498"}, {"id": "2305.02182"}, {"id": "2305.12090"}, {"id": "2305.07609"}, {"id": "2304.03516"},
{"id": "2303.14524"}, {"id": "2305.15673"}, {"id": "2301.00234"}, {"id": "2305.13112"}, {"id": "2307.10747"}, {"id": "2302.02591"}, {"id": "2305.15062"}, {"id": "2307.15780"}, {"id": "2303.13835"}, {"id": "2307.05722"},
{"id": "2305.07001"}, {"id": "2303.17564"}, {"id": "2305.11700"}, {"id": "2304.03879"}, {"id": "2206.08082"}, {"id": "2305.05065"}, {"id": "2305.00447"}, {"id": "2302.05729"}, {"id": "2304.10149"}, {"id": "2304.01097"},
{"id": "2306.05817"}, {"id": "2304.03153"}, {"id": "2304.04218"}, {"id": "2301.11489"}, {"id": "2305.06569"}, {"id": "2206.06190"}, {"id": "2307.02157"}, {"id": "2305.19860"}, {"id": "2305.15756"}, {"id": "2305.07633"},
{"id": "2305.16582"}, {"id": "2305.08845"}, {"id": "2307.03393"}, {"id": "2304.11116"}, {"id": "2306.06031"}, {"id": "2303.18223"}, {"id": "2305.15036"}, {"id": "2305.17812"}, {"id": "2010.01494"}, {"id": "2205.09666"},
{"id": "2205.08084"}, {"id": "2106.09685"}, {"id": "2106.00573"}, {"id": "2305.11255"}, {"id": "1810.04805"}, {"id": "2204.02311"}, {"id": "2305.06566"}, {"id": "2306.17256"}, {"id": "2305.06212"}, {"id": "2306.02552"},
{"id": "2305.07961"}, {"id": "2203.11171"}, {"id": "2301.12867"}, {"id": "2305.04518"}, {"id": "2305.14552"}, {"id": "2112.08633"}, {"id": "2307.14225"}, {"id": "1511.06939"}, {"id": "2012.15723"}, {"id": "2303.08896"},
{"id": "2306.06615"}, {"id": "2305.15075"}, {"id": "2305.09858"}, {"id": "2209.10117"}, {"id": "2305.06474"}, {"id": "2201.08239"}, {"id": "2302.03735"}, {"id": "2109.01652"}, {"id": "2305.07622"}, {"id": "2306.10933"} ]
2307.02053 | 14 | # 3.4 Results
Comparative Baselines. As baselines, we selected VICUNA [Zheng et al., 2023] and STABLEVICUNA¹.
Few-shot Problem-solving. We present the results of FLACUNA on five datasets (see Table 2) from the INSTRUCTEVAL benchmark, focusing on problem-solving tasks. In 4 out of 5 tasks, FLACUNA outperformed VICUNA, showing an average performance improvement of 5.6 points over the LLaMA backbone. However, it performed slightly worse on code-related problem-solving tasks in the HumanEval dataset, with a margin of 0.6 points. Overall, the improvement in FLACUNA compared to VICUNA is 5.1 points averaged over the five tasks.
Out of the five problem-solving datasets, one of them, DROP, is categorized as a held-in dataset. It is a part of our FLAN collection and was utilized for training FLACUNA. As a result, we observed a significant performance boost of 11 points compared to VICUNA. The remaining datasets are considered held out.
¹ https://huggingface.co/CarperAI/stable-vicuna-13b-delta
| 2307.02053#14 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2301.13688"}, {"id": "2106.09685"}, {"id": "2203.07814"}, {"id": "1909.09436"} ]
2307.02477 | 14 | the same promptw(base-9) to specify the counterfactual world, and check that it facilitates an understanding of w = base-9 by asking what the next integer after x′ is. If, for example, it consistently carries over digits greater than 8 and does not carry over otherwise, this would show the effectiveness of promptw(base-9). Our CCC designs are heuristic: as with control tasks in the probing literature (Hewitt and Liang, 2019), we rely on intuition to craft a gw that is "simpler" than fw.3
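A minimal version of such a check for the base-9 world can be written as follows; `query_lm` and the exact wording of promptw are hypothetical placeholders.

```python
# Sketch of a base-9 CCC: ask for the successor of x' under the
# counterfactual world prompt and grade against Python's base conversion.
# `query_lm` and the prompt wording are hypothetical placeholders.

def query_lm(prompt: str) -> str:
    raise NotImplementedError

def to_base(n: int, base: int) -> str:
    digits = ""
    while n:
        digits, n = str(n % base) + digits, n // base
    return digits or "0"

def ccc_base9(x: str) -> bool:
    gold = to_base(int(x, 9) + 1, 9)     # successor computed in base 9
    prompt = (f"Suppose all numbers are in base-9, where the digits are "
              f"'012345678'. What number comes right after {x}?")
    return query_lm(prompt).strip() == gold

# e.g. the successor of "18" is "20": the digit 8 carries over in base 9.
```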
# 3 Tasks
In this section, we give a quick overview of the tasks we consider. See §A for the full description of each task and §B for all the prompts used.
# 3.1 Arithmetic | 2307.02477#14 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 14 | Metrics We evaluate the performance by two metrics: Average Steps L taken to finish the task and Efficiency Improvement (EI), which measures the efficiency gained by cooperating with other agents as $\sum_{i=1}^{N} (L_{\text{single},i} - L_{\text{multi},i}) / L_{\text{single},i}$, where $L_{\text{single},i}$ denotes the average steps for a single agent to finish episode $i$, and $L_{\text{multi},i}$ denotes the average steps for multiple agents to finish episode $i$.
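In code, this metric amounts to the following sketch (the paper's formula sums the per-episode ratios; an averaged variant would simply divide by N):

```python
# Sketch of the Efficiency Improvement metric: the per-episode relative
# reduction in steps when cooperating, summed over the N test episodes.

def efficiency_improvement(steps_single, steps_multi):
    assert len(steps_single) == len(steps_multi)
    return sum((s - m) / s for s, m in zip(steps_single, steps_multi))

# Two episodes where cooperation cuts 100 -> 60 and 80 -> 60 steps:
# efficiency_improvement([100, 80], [60, 60]) == 0.4 + 0.25 == 0.65
```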
MCTS-based Hierarchical Planner We adopt the strongest baseline from the original Watch-And-Help Challenge, which is a Hierarchical Planner with a high-level planner based on MCTS and a low-level planner based on regression planning (RP).
# 4.1.2 ThreeDWorld Multi-Agent Transport
We extend the ThreeDWorld Transport Challenge [12] into a multi-agent setting with more types of objects and containers, more realistic object placements, and support for communication between agents, named ThreeDWorld Multi-Agent Transport (TDW-MAT), built on top of the TDW platform [11], which is a general-purpose virtual world simulation platform. The agents are tasked to transport as many target objects as possible to the goal position with the help of containers as tools, without which the agent can transport only two objects at a time. The agents have the same ego-centric visual observation and action space as before, with a new communication action added. | 2307.02485#14 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"} ]
2307.02486 | 14 | Moreover, the information of each token can be propagated to a maximum distance of $D$:
$D = \sum_{i=0}^{l-1} w_i = w_0 \sum_{i=0}^{l-1} \alpha^i \approx \frac{w_0}{\alpha - 1} \alpha^l$ (19)
where $l$ is the length of the propagated path. Therefore, the maximum path length of a sequence with $N$ tokens can be estimated as:
$L \approx \log_\alpha \frac{N(\alpha - 1)}{w_0} \quad (\alpha > 1)$ (20)
This proves that the token dependency is approximately $O(\log N)$.
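Eq. (20) can be checked numerically; with the arbitrary constants below, a 1024x longer sequence adds only about ten hops to the maximum path length.

```python
import math

# Numerical check of Eq. (20): max path length grows as log_a(N(a-1)/w0).

def max_path_length(N, w0=2048, a=2):
    return math.log(N * (a - 1) / w0, a)

print(round(max_path_length(2**20), 1))   # ~9 hops at one million tokens
print(round(max_path_length(2**30), 1))   # ~19 hops at one billion tokens
```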
# 3 LONGNET as a Distributed Trainer: Scaling up to 1B Tokens
Although the computation complexity of dilated attention has been greatly reduced to O(N d), it is infeasible to scale the sequence length to the million level on a single GPU device due to the computation and memory constraints. There are some distributed training algorithms for large-scale model training, such as model parallelism [SPP 22] and pipeline parallelism [HCB 19]. However, they are insufficient for LONGNET, especially when the sequence dimension is extremely large.
# 3.1 Distributed Algorithm | 2307.02486#14 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.02046 | 15 | # 2 RELATED WORK
In this section, we briefly review some related work on recommender systems and LLM techniques.
# 2.1 Recommender Systems (RecSys)
To address the information overload problem, recommender systems have emerged as a crucial tool in various online applications by providing personalized suggestions that cater to user preferences [24], [25]. Typically, most existing recommendation approaches can fall into two main categories: Collaborative Filtering (CF) and Content-based recommendation. As the most common technique, CF-based recommendation methods aim to find similar behavior patterns of users to predict the likelihood of future interactions [12], which can be achieved by utilizing the historical interaction behaviors between users and items, such as purchase history or rating data. For example, as one of the most popular CF methods, Matrix Factorization (MF) is introduced to learn representations of users and items by | 2307.02046#15 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{"id": "2201.11903"}, {"id": "2305.05973"}, {"id": "2010.15980"}, {"id": "2307.09688"}, {"id": "2307.07171"}, {"id": "2305.15498"}, {"id": "2305.02182"}, {"id": "2305.12090"}, {"id": "2305.07609"}, {"id": "2304.03516"},
{"id": "2303.14524"}, {"id": "2305.15673"}, {"id": "2301.00234"}, {"id": "2305.13112"}, {"id": "2307.10747"}, {"id": "2302.02591"}, {"id": "2305.15062"}, {"id": "2307.15780"}, {"id": "2303.13835"}, {"id": "2307.05722"},
{"id": "2305.07001"}, {"id": "2303.17564"}, {"id": "2305.11700"}, {"id": "2304.03879"}, {"id": "2206.08082"}, {"id": "2305.05065"}, {"id": "2305.00447"}, {"id": "2302.05729"}, {"id": "2304.10149"}, {"id": "2304.01097"},
{"id": "2306.05817"}, {"id": "2304.03153"}, {"id": "2304.04218"}, {"id": "2301.11489"}, {"id": "2305.06569"}, {"id": "2206.06190"}, {"id": "2307.02157"}, {"id": "2305.19860"}, {"id": "2305.15756"}, {"id": "2305.07633"},
{"id": "2305.16582"}, {"id": "2305.08845"}, {"id": "2307.03393"}, {"id": "2304.11116"}, {"id": "2306.06031"}, {"id": "2303.18223"}, {"id": "2305.15036"}, {"id": "2305.17812"}, {"id": "2010.01494"}, {"id": "2205.09666"},
{"id": "2205.08084"}, {"id": "2106.09685"}, {"id": "2106.00573"}, {"id": "2305.11255"}, {"id": "1810.04805"}, {"id": "2204.02311"}, {"id": "2305.06566"}, {"id": "2306.17256"}, {"id": "2305.06212"}, {"id": "2306.02552"},
{"id": "2305.07961"}, {"id": "2203.11171"}, {"id": "2301.12867"}, {"id": "2305.04518"}, {"id": "2305.14552"}, {"id": "2112.08633"}, {"id": "2307.14225"}, {"id": "1511.06939"}, {"id": "2012.15723"}, {"id": "2303.08896"},
{"id": "2306.06615"}, {"id": "2305.15075"}, {"id": "2305.09858"}, {"id": "2209.10117"}, {"id": "2305.06474"}, {"id": "2201.08239"}, {"id": "2302.03735"}, {"id": "2109.01652"}, {"id": "2305.07622"}, {"id": "2306.10933"} ]
2307.02053 | 15 | Model (Size): MMLU (5-shot) / BBH (3-shot) / DROP† (3-shot) / CRASS (3-shot) / HumanEval (0-shot) / Avg., reported as Perf. with Δ in parentheses where available.
GPT-4 (-): 86.4 / - / 80.9 / - / 67.0 / -
ChatGPT (-): 70.0 / 49.5 / 64.1 / 90.5 / 48.1 / 64.5
Flan-UL2 (20B): 55.0 / 44.7 / 64.3 / 94.2 / 0.0 / 51.6
Alpaca-Lora (30B): 58.4 (+0.6) / 41.3 (+2.0) / 45.1 (-0.3) / 79.2 (+10.6) / 18.9 (+4.9) / 48.6
OpenAssistant (30B): 56.9 (-0.9) / 39.2 (-0.1) / 46.0 (+0.6) / 67.2 (+1.4) / 23.1 (+9.1) / 46.5
OPT-IML (30B): 38.6 (+11.3) / 31.3 (+3.0) / 47.5 (+28.0) / 67.2 (+32.5) / 9.1 (+7.9) / 38.7
Flan-T5 Flan-Alpaca Dolly V2 11B | 2307.02053#15 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{"id": "2301.13688"}, {"id": "2106.09685"}, {"id": "2203.07814"}, {"id": "1909.09436"} ]
2307.02477 | 15 | # 3.1 Arithmetic
Modern LMs have been shown to possess basic numerical reasoning abilities (Lewkowycz et al., 2022), with even GPT-3 reporting near-perfect accuracy for two-digit additions (Brown et al., 2020). On the other hand, Razeghi et al. (2022) find that LMs perform significantly better on operations involving numbers that occur more frequently in the pretraining data, and Li et al. (2023d) show that symbol replacement affects the mathematical ability of BERT (Devlin et al., 2019)-like models; both findings point to overfitting and memorization effects. We consider the same two-digit addition task, the simplest arithmetic task in Brown et al. (2020), but inspect a model's accuracy in different bases. We use base-8, 9, 11, and 16 as the counterfactual setup, which are natural generalizations of base-10 arithmetic. These bases were chosen to control for task difficulty (see §7.1 for a discussion) and also to test how relatively uncommon (9 & 11) and common (8 & 16) bases affect performance (see §5.1 for an analysis). To ensure the model understands the different bases, the CCC evaluates the successor relation under each base.
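The task construction can be sketched as below; operand sampling and grading rely only on Python's base conversion, and `query_lm` together with the prompt wording are hypothetical placeholders.

```python
import random

# Sketch of the two-digit addition task in an arbitrary base: sample operands
# that have exactly two digits in that base and grade answers by exact match.
# `query_lm` and the prompt wording are hypothetical placeholders.

def query_lm(prompt: str) -> str:
    raise NotImplementedError

def to_base(n: int, base: int) -> str:
    digits, out = "0123456789ABCDEF", ""
    while n:
        out, n = digits[n % base] + out, n // base
    return out or "0"

def accuracy(base: int, n_trials: int = 100) -> float:
    correct = 0
    for _ in range(n_trials):
        a, b = random.randrange(base, base**2), random.randrange(base, base**2)
        prompt = (f"In base-{base}, what is {to_base(a, base)} + "
                  f"{to_base(b, base)}? End the response with the result.")
        correct += query_lm(prompt).strip().endswith(to_base(a + b, base))
    return correct / n_trials
```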
# 3.2 Programming | 2307.02477#15 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 15 | Setup We selected 6 scenes from the TDW-House dataset and sampled 2 types of tasks in each of the scenes, making a test set of 12 episodes. Every scene has 6 to 8 rooms, 10 objects, and 4 containers. An episode is terminated if all the target objects have been transported to the goal position or the maximum number of frames (3000) is reached.
Metrics We use the Transport Rate (TR) as the evaluation metric, which is calculated as the fraction of the target objects successfully transported to the goal position, and calculate the Efficiency Improvement (EI) similarly to the above as $\sum_{i=1}^{N} (TR_{\text{multi},i} - TR_{\text{single},i}) / TR_{\text{multi},i}$, where $TR_{\text{single},i}$ denotes the single agent's transport rate for episode $i$, and $TR_{\text{multi},i}$ denotes the multiple agents' transport rate for episode $i$.
| 2307.02485#15 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{"id": "2211.09935"}, {"id": "1712.05474"}, {"id": "2007.04954"}, {"id": "2210.04964"}, {"id": "1909.07528"}, {"id": "1903.00784"}, {"id": "1711.11017"}, {"id": "2201.11903"}, {"id": "2305.02412"}, {"id": "2212.08681"}, {"id": "2110.01517"}, {"id": "1809.00786"}, {"id": "1809.07124"}, {"id": "2303.03378"}, {"id": "2210.06849"}, {"id": "2305.05252"}, {"id": "2302.14045"}, {"id": "1810.00147"}, {"id": "2011.01975"}, {"id": "2209.07753"}, {"id": "2303.04129"}, {"id": "2301.05223"}, {"id": "2205.11916"}, {"id": "2206.08916"}, {"id": "2304.03442"}, {"id": "2204.01691"}, {"id": "2207.05608"}, {"id": "2212.04088"} ]
2307.02486 | 15 | # 3.1 Distributed Algorithm
We take advantage of the linear computation complexity of LONGNET for distributed training along the sequence dimension. Without loss of generality, Figure 4 presents our distributed algorithm on two GPUs, which can be further scaled to an arbitrary number of devices. We start by splitting the input sequence along the sequence dimension. Each sequence is put on one device separately:
$X = [X_1, X_2]$ (21)
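A toy NumPy version of this split, and of the per-device projection given in Eq. (22) below, is sketched here; no real devices are involved, it only illustrates the data layout.

```python
import numpy as np

# Toy illustration of Eq. (21) and the projections of Eq. (22) below: shard
# the sequence dimension across two "devices" and project each shard locally.

N, d = 8, 4
X = np.random.randn(N, d)
W_Q, W_K, W_V = (np.random.randn(d, d) for _ in range(3))

X1, X2 = np.split(X, 2, axis=0)             # Eq. (21): one shard per device
Q1, K1, V1 = X1 @ W_Q, X1 @ W_K, X1 @ W_V   # Eq. (22) on device 1
Q2, K2, V2 = X2 @ W_Q, X2 @ W_K, X2 @ W_V   # Eq. (22) on device 2

# Sharded projections agree with projecting the full sequence at once:
assert np.allclose(np.vstack([Q1, Q2]), X @ W_Q)
```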
Figure 5: Runtime of our dilated attention and vanilla attention. Both are equipped with FlashAttention [DFE+22].
Then, they are projected into queries, keys, and values on the two devices:
$[Q_1, K_1, V_1] = [W_Q, W_K, W_V] X_1, \quad [Q_2, K_2, V_2] = [W_Q, W_K, W_V] X_2$ (22) | 2307.02486#15 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
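To make Equations (21) and (22) concrete, here is a minimal single-process PyTorch sketch (an illustration, not the LongNet reference implementation); the two "devices" are simulated with tensor shards, and all sizes are assumptions.

```python
# Sketch of Eq. (21)-(22): split the input along the sequence dimension, then
# project each shard into queries/keys/values with weights shared across devices.
import torch
import torch.nn as nn

d_model, seq_len, n_devices = 64, 1024, 2
x = torch.randn(1, seq_len, d_model)           # full input sequence X
x1, x2 = torch.chunk(x, n_devices, dim=1)      # X = [X1, X2]   (Eq. 21)

w_q = nn.Linear(d_model, d_model, bias=False)  # W_Q, shared across devices
w_k = nn.Linear(d_model, d_model, bias=False)  # W_K
w_v = nn.Linear(d_model, d_model, bias=False)  # W_V

q1, k1, v1 = w_q(x1), w_k(x1), w_v(x1)         # [Q1, K1, V1]   (Eq. 22)
q2, k2, v2 = w_q(x2), w_k(x2), w_v(x2)         # [Q2, K2, V2]
print(q1.shape, q2.shape)                      # each shard: (1, 512, 64)
```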
2307.03692 | 15 | Table 1: Examples of possible combinations of fragments I, R, Ip, Ic. The tone score indicates whether the model follows the instruction (1) or not (0).
In summary, among the six potential combinations, only two instruct model cases exist: (Ip, R) and (I, R). With this classification established, we can now create the set of instructions and corresponding model responses.

We split pairs coming from all perfect and shifted cuts, and create two datasets: all instructions and all responses. The set of instructions is used to generate data used for prompting models, while the set of responses represents the right side of Fig. 1, i.e., original responses or responses shifted to the right. The collected classes are:

Label 0: Ic, Ic+R

Label 1: R

We drop the fine-grained classification of responses and assign them only to "answer-like" (label 1) or "continuation-like" (label 0). These samples are later used to train the binary classifier. Table 3 shows examples of responses and their labels.
| 2307.03692#15 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
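As a toy illustration of the labeling scheme in this chunk (ours, not the paper's code), the fragment classes collapse into binary tone labels as follows; the class tags and example strings are assumptions.

```python
# Sketch of the collapse from fragment classes to binary tone labels:
# Ic and Ic+R responses are "continuation-like" (0); plain R is "answer-like" (1).
CLASS_TO_LABEL = {"Ic": 0, "Ic+R": 0, "R": 1}

responses = [
    ("Ic", "it fly so fast?"),
    ("R", "The fastest flying bird is the peregrine falcon."),
]
labeled = [(text, CLASS_TO_LABEL[cls]) for cls, text in responses]
print(labeled)  # [('it fly so fast?', 0), ('The fastest flying bird ...', 1)]
```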
2307.02046 | 16 | using pure user-item interactions [7], [26]. In other words, unique identities of users and items (i.e., discrete IDs) are encoded into continuous embedding vectors so that the matching score can be calculated easily for recommendations [27], [28]. Content-based recommendation methods generally take advantage of additional knowledge about users or items, such as user demographics or item descriptions, to enhance user and item representations for improving recommendation performance [29]. Note that as textual information is one of the most available contents for users and items, we mainly focus on text as content in this survey. Due to the remarkable representation learning capabilities, deep learning techniques have been effectively applied to develop recommender systems [5], [25]. For instance, NeuMF is proposed to model non-linear interactions between users and items by replacing the general inner product with DNNs [30]. Considering that data in RecSys can be naturally represented as graph-structured data, GNN techniques are treated as the main deep learning approaches for learning meaningful representations of nodes (i.e., users and items) via message propagation strategies for recommender systems [1], [31]–[33]. In order to integrate textual knowledge about users | 2307.02046#16 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
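For the NeuMF idea mentioned above (replacing the inner product over user/item ID embeddings with a neural network [30]), a minimal PyTorch-style sketch could look as follows; all dimensions and names are illustrative assumptions, not the original implementation.

```python
# A NeuMF-style matching-score model: embed discrete user/item IDs into
# continuous vectors, then score the pair with an MLP instead of an inner product.
import torch
import torch.nn as nn

class NeuMFScorer(nn.Module):
    def __init__(self, n_users, n_items, dim=32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, user_ids, item_ids):
        u, v = self.user_emb(user_ids), self.item_emb(item_ids)
        return self.mlp(torch.cat([u, v], dim=-1)).squeeze(-1)  # matching score

scorer = NeuMFScorer(n_users=1000, n_items=5000)
print(scorer(torch.tensor([0, 1]), torch.tensor([10, 20])))
```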
2307.02053 | 16 | +9.1 +7.9 51.6 48.6 46.5 38.7

Model | Size | MMLU | BBH | DROP | CRASS | HumanEval | Avg.
Flan-T5 | 11B | 54.5 (+29.3) | 43.9 (+13.6) | 67.2 (+49.7) | 88.3 (+54.7) | 0.0 (+0.0) | 50.8
Flan-Alpaca | 11B | 50.9 (+25.7) | 23.3 (-7.0) | 62.3 (+44.8) | 90.2 (+56.6) | 0.0 (+0.0) | 45.3
Dolly V2 | 12B | 25.6 (-1.3) | 29.7 (+0.2) | 16.6 (-0.5) | 35.8 (+1.1) | 8.5 (-0.6) | 23.2
Flan-T5 | 3B | 49.2 (+25.9) | 40.2 (+15.9) | 56.3 (+43.7) | 91.2 (+60.2) | 0.0 (+0.0) | 47.4
ChatGLM | 6B | 36.1 (-) | 31.3 (-) | 44.2 (-) | 51.1 (-) | 3.1 (-) | 33.2
Mosaic-Chat | 7B | 37.1 (+1.9) | 32.0 (+1.1) | 20.2 (-7.4) | 47.5 (+13.6) | 17.7 (+7.4) | 30.9
STABLEVICUNA | 13B | 49.2 ...
VICUNA | 13B | 50.6 ...
FLACUNA | 13B | ... | 2307.02053#16 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 16 | # 3.2 Programming
Even without explicit pretraining on large amounts of code, LMs have been found to possess decent coding ability (Brown et al., 2020). The inclusion of large code corpora in LM pretraining (Gao et al., 2021; Chowdhery et al., 2022; Touvron et al., 2023;
3In this formulation, LM queries for CCC are separate from the main task queries. For some tasks, it is more natural to query about the task and CCC jointly in the same prompt, i.e., PLM(y, y′ | promptf(f, x), promptg(g, x′), promptw(wcf)). We use this formulation instead for those tasks. | 2307.02477#16 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
2307.02485 | 16 | Rule-based Hierarchical Planner We adopt the strong-performing baseline from the original challenge: a Rule-based Hierarchical Planner with a Frontier Exploration strategy, consisting of a rule-based high-level planner, which selects one of the high-level plans from Exploration, Pick up an object, Pick up a container, and Place according to some human-defined rules, and an A-star-based planner that navigates with an occupancy map and a semantic map obtained and updated from the visual observation. The Frontier Exploration strategy randomly samples a way-point from an unexplored area as a sub-goal for exploration.
Implementation Details. We instantiate our framework with the recent LLM GPT-4. We access GPT-4 through the OpenAI API and use a temperature of 0.7, top-p of 1, and max tokens of 256. We show an example prompt for the Reasoning Module for both environments in Appendix C.
# 4.2 Results
Method | C-WAH Symbolic Obs: Average Steps | EI | C-WAH Visual Obs: Average Steps | EI | TDW-MAT: Transport Rate | EI
HP | 111 | / | 141 | / | 0.53 | /
HP + HP | 75 | 33% | 103 | 26% | 0.79 | 34%
HP + LLM | 59 | 45% | 94 | 34% | 0.86 | 38%
LLM + LLM | 57 | 49% | 92 | 34% | 0.86 | 39% | 2307.02485#16 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
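A minimal sketch of the GPT-4 access pattern from the Implementation Details above, using the OpenAI Python client of that period (pre-1.0 API); the prompt string and key handling are placeholders, not the paper's actual Reasoning Module prompt.

```python
# Query GPT-4 with the decoding parameters reported above:
# temperature 0.7, top-p 1, max tokens 256.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # assumption: key in environment

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "<Reasoning Module prompt here>"}],
    temperature=0.7,
    top_p=1,
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```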
2307.02486 | 16 | For the segment length wi ≤ l (where l is the sequence length on the local device), we compute the attention locally with Equation (3) to Equation (8). For the segment length wi > l, the keys and values are distributed across different devices. Therefore, we collect the key-value pairs before computing the attention. We use Equation (3) to Equation (5) to sparsify {Q, K, V} into {Q̃, K̃, Ṽ}. An all-gather operation is implemented to collect the key-value pairs:
K̃ = [K̃1, K̃2], Ṽ = [Ṽ1, Ṽ2] (23)
Note that the all-gather operation in the backward pass becomes a reduce-scatter operation. Different from vanilla attention, the sizes of both K̃i and Ṽi are independent of the sequence length N, making the communication cost constant. Finally, we compute the cross-attention with the local queries Q̃i and the global key-value pairs {K̃, Ṽ}. The formulation is written as: | 2307.02486#16 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
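A sketch of the all-gather step in Equation (23) and the cross-attention that follows (illustrative; assumes torch.distributed is initialized with one process per device, e.g., via torchrun). Note the plain all_gather below is not autograd-aware; training code would use a differentiable variant so that the backward pass becomes the reduce-scatter described above.

```python
# Each rank holds its sparsified local K/V shard, gathers the full K̃ and Ṽ
# (whose sizes are independent of the sequence length N), and attends with
# its local queries Q̃_i.
import torch
import torch.distributed as dist

def gather_kv(k_local, v_local):
    world_size = dist.get_world_size()
    k_list = [torch.empty_like(k_local) for _ in range(world_size)]
    v_list = [torch.empty_like(v_local) for _ in range(world_size)]
    dist.all_gather(k_list, k_local)
    dist.all_gather(v_list, v_local)
    return torch.cat(k_list, dim=1), torch.cat(v_list, dim=1)  # K̃, Ṽ  (Eq. 23)

def cross_attention(q_local, k, v):
    scores = q_local @ k.transpose(-2, -1) / k.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v  # local output shard
```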
2307.03692 | 16 |
Response | Label
it fly so fast? The fastest flying bird is the peregrine falcon. | 0
agent? I'm not a FBI agent. | 0
When onions are cut, they release a chemical called sulfuric acid. | 1
James Madison was the primary author of the Constitution and the Bill of Rights. | 1
Table 3: Examples of responses and their categories.
# 4 Binary classifier and Instruction Following Score
The binary classifier for response-tone classification was selected as the best-performing binary classifier trained on the set of responses using Huggingface AutoTrain (Huggingface 2023a). Since the dataset consisted of a roughly equal split of negative and positive samples, we chose accuracy as the comparison metric. The winning architecture was BertForSequenceClassification, and the final classifier metrics (as reported by AutoTrain) are presented in Table 4.
Metric | Value
Accuracy | 0.970
Precision | 0.983
Recall | 0.925
Table 4: Validation metrics | 2307.03692#16 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
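A minimal inference sketch matching the classifier described above (a BertForSequenceClassification model with two tone labels); the checkpoint path is hypothetical, since no released artifact is named in this chunk.

```python
# Classify a response as "answer-like" (1) vs. "continuation-like" (0).
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("path/to/tone-classifier")  # hypothetical path
model = AutoModelForSequenceClassification.from_pretrained("path/to/tone-classifier")

def is_answer_like(response: str) -> bool:
    inputs = tokenizer(response, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1)) == 1

print(is_answer_like("The fastest flying bird is the peregrine falcon."))
```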
2307.02046 | 17 | users and items) via message propagation strategies for recommender systems [1], [31]–[33]. In order to integrate textual knowledge about users and items, DeepCoNN is developed to use CNNs to encode users' reviews written for items with two parallel neural networks so as to contribute to rating predictions in recommender systems [8]. Meanwhile, a neural attention framework NARRE is introduced to simultaneously predict users' ratings towards items and generate review-level explanations for the predictions [34]. | 2307.02046#17 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
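To make the BERT4Rec idea above concrete — bidirectional encoding of a user's interaction sequence with masked-item prediction [38] — here is a schematic sketch built on a generic Transformer encoder; the hyperparameters and masking convention are assumptions, not the original code.

```python
# Encode an item sequence bidirectionally and predict the item at a masked slot.
import torch
import torch.nn as nn

n_items, dim, mask_id = 1000, 64, 0
item_emb = nn.Embedding(n_items + 1, dim)  # id 0 reserved for the [MASK] token
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(dim, nhead=4, batch_first=True), num_layers=2
)
out_proj = nn.Linear(dim, n_items + 1)

seq = torch.tensor([[5, 42, mask_id, 17]])  # one interaction sequence, slot 2 masked
hidden = encoder(item_emb(seq))
logits = out_proj(hidden[:, 2])             # distribution over the masked item
print(logits.shape)                         # torch.Size([1, 1001])
```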
2307.02477 | 17 | i.a.) further improves this capability in recent LMs, with ChatGPT sometimes outperforming state-of-the-art approaches for bug fixing (Sobania et al., 2023). Nevertheless, Miceli-Barone et al. (2023) show that GPT-3 and related models are fragile under identifier swaps in programs, suggesting that these models may only possess a shallow understanding of code. Here, we inspect an LM's programming ability through a deeper counterfactual perturbation: contrary to the traditional 0-based indexing in Python, we instruct the LM to evaluate or generate programs under a fictional language, ThonPy, that uses 1-based indexing but is otherwise identical to Python. 1-based indexing is a common assumption for other programming languages such as MATLAB and R and hence provides a fair testbed. We evaluate the LM's performance using the HumanEval dataset (Chen et al., 2021). The CCC here involves the same program execution task but on much simpler inputs, such as simple list indexing, that do not involve deeper reasoning.
# 3.3 Basic Syntactic Reasoning | 2307.02477#17 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
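As a toy contrast (ours, not the paper's evaluation harness), the same indexing query has different answers under Python's 0-based convention and ThonPy's 1-based one:

```python
# Python is 0-based; the fictional ThonPy is 1-based but otherwise identical.
def thonpy_get(seq, i):
    """Index seq the way ThonPy would: positions start at 1."""
    if i < 1:
        raise IndexError("ThonPy positions start at 1")
    return seq[i - 1]

xs = ["a", "b", "c"]
print(xs[1])              # Python: 'b'
print(thonpy_get(xs, 1))  # ThonPy: 'a'
```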
2307.02485 | 17 | Table 1: Main results. We report the mean results here over 5 runs except for LLM, which takes only one run due to cost constraints. The best results are in bold. The best performance is achieved when cooperating with LLM agents.
Quantitative results As shown in Table 1, on C-WAH, compared with the MCTS-based HP agent doing the task alone, cooperating with another MCTS-based HP agent provides an efï¬ciency im- provement of 33% and 26% under symbolic and visual observation, while cooperating with the LLM agent boosts the speed-up to 45% and 34% respectively, even without any knowledge of the inner working mechanism of the others, which shows LLMs can reason about the other agentâs state well without hand-designed heuristics. Whatâs more, when two LLM agents cooperate together, they can achieve even better performance. From TDW-MAT, we can observe the same performance boost of cooperating with the LLM agent of 38% compared to 34% of cooperating with the rule-based HP agent. These results show our embodied agents built with LLMs are better cooperators.
Qualitative results To better understand the essential factors for effective cooperation, we conduct a qualitative analysis of the agentsâ behaviors exhibited in our experiments and identiï¬ed several cooperative behaviors. | 2307.02485#17 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 17 | Õ1 = softmax(Q̃1 K̃^T) Ṽ, Õ2 = softmax(Q̃2 K̃^T) Ṽ (24)
The concatenation of the outputs across different devices becomes the final attention output:
Õ = [Õ1, Õ2] (25)
The distributed algorithm described above is orthogonal to other parallelisms, including data paral- lelism which partitions the batch dimension, model parallelism which partitions the hidden dimension, and pipeline parallelism which partitions the layers.
# 3.2 Scaling up to 1B Tokens
We verify the feasibility of scaling to 1B tokens with modern distributed systems. Starting from 8K, we gradually scale the sequence length until the limit of GPU memory. We reduce the batch size
accordingly to keep the number of tokens per batch at 1 billion. Each model of different sequence lengths has up to 3 segment lengths, which are 2,048, the number of tokens per device, and the sequence length. We compute the average speed in the forward propagation for 10 different runs. | 2307.02486#17 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
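A sketch of the measurement protocol described above — average forward-pass speed over 10 runs — with model construction left abstract; `model` stands for any torch.nn.Module under test, and CUDA synchronization keeps the timings honest.

```python
import time
import torch

def avg_forward_time(model, x, runs=10):
    """Average forward-pass time in seconds over `runs` repetitions."""
    with torch.no_grad():
        model(x)  # warm-up
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.perf_counter()
        for _ in range(runs):
            model(x)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.perf_counter() - start) / runs
```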
2307.03692 | 17 |
Metric | Value
Accuracy | 0.970
Precision | 0.983
Recall | 0.925
Table 4: Validation metrics
We define the Instruction Following Score (IFS) as the ratio of all responses classified as "answer-like" (label 1) to all responses obtained by prompting the instructions dataset. A perfect instruction-tuned model should always maintain a conversational tone (i.e., respond like a chat model to all instructions, whether partial or not), so the maximum IFS is 1. We can additionally define two related metrics, IFSpartial and IFSfull, being the ratio of "answer-like" responses to all partial and full instructions respectively.
In the following sections, we will use IFS to evaluate vanilla models as well as response tone changes achieved by prompt engineering and an SFT process.
# 5 Results
# 5.1 Baseline | 2307.03692#17 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
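A minimal sketch (not the paper's code) of computing IFS and its two variants from binary tone labels; the toy labels and partial/full flags below are assumptions.

```python
# IFS = share of prompted responses classified as "answer-like" (label 1).
def ifs(labels):
    return sum(labels) / len(labels) if labels else 0.0

labels     = [1, 0, 1, 1, 0, 1]                       # toy classifier outputs
is_partial = [True, True, False, False, True, False]  # instruction type per prompt

ifs_all     = ifs(labels)
ifs_partial = ifs([l for l, p in zip(labels, is_partial) if p])
ifs_full    = ifs([l for l, p in zip(labels, is_partial) if not p])
print(ifs_all, ifs_partial, ifs_full)
```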
2307.02046 | 18 | Recently, language models have been increasingly utilized in recommender systems due to their capacity to comprehend and produce human natural language. These models are designed to comprehend the semantics and syntax of human natural language, thereby enabling RecSys to provide more personalized recommendations, such as news recommendations [35], [36], and drug recommendations [37]. Specifically, a sequential recommendation method called BERT4Rec is proposed to adopt Bidirectional Encoder Representations from Transformers (i.e., BERT) to model the sequential nature of user behaviors [38]. Furthermore, to take advantage of Transformer's capability for language generation, Li et al. [39] design a transformer-based framework to simultaneously make item recommendations and generate explanations in recommender systems.
# 2.2 Large Language Models (LLMs)
As a type of advanced Artificial Intelligence (AI) technique, LLMs are trained on a large amount of textual data with billions of parameters to understand the patterns and structures of natural language. There are several classical types of pre-trained language models available, such as BERT (Bidirectional Encoder Representations from Transformers) [40], GPT (Generative Pre-trained Transformer) [41], and T5 (Text-To-Text Transfer Transformer) [42]. Typically, these language models fall into three main categories: encoder-only models, decoder-only models, and encoder-decoder models. | 2307.02046#18 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 18 | Table 2: Evaluation results for problem-solving benchmarks. We denote the original performance across the benchmarks as Perf., while Δ denotes the change in performance compared to the corresponding foundation LLMs. † indicates that DROP is a held-in dataset.
Model | Size | MMLU (0-shot) | BBH (0-shot) | CRASS (0-shot)
Flan-UL2 | 20B | 54.4 | 34.9 | -
OpenAssistant | 30B | 52.0 | 33.4 | -
OPT IML | 30B | 41.3 | 17.4 | -
TK-Instruct | 11B | 39.4 | 17.1 | -
Flan-T5-XXL | 11B | 54.1 | 39.5 | -
Dolly V2 | 12B | 25.4 | 22.3 | -
STABLEVICUNA | 13B | 47.5 | 18.5 | 64.2
VICUNA | 13B | 48.3 | 28.3 | 65.7
FLACUNA | 13B | 49.4 | 32.5 | 67.9
Table 3: 0-shot problem-solving evaluation of FLACUNA and other baseline models. | 2307.02053#18 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 18 | # 3.3 Basic Syntactic Reasoning
Mahowald et al. (2023) distinguish between two types of LM capabilities: formal competence that encompasses the knowledge of language, and functional competence which involves using language, potentially combined with extralinguistic capacities, to interact with the world. While the other tasks we investigate in this paper assess a model's functional competence, we also include an evaluation on formal competence. We revisit the attested syntactic knowledge of LMs (Yu et al., 2020; Linzen and Baroni, 2021; Ettinger, 2020; Pimentel and Cotterell, 2021; Belinkov, 2022; Lasri et al., 2022; i.a.) by considering a meta-linguistic task (Beguš et al., 2023; Hu and Levy, 2023; i.a.): evaluating LMs in synthetic versions of English with different word orders from English's subject-verb-object (SVO) ordering. We ask the LM to identify the main subject and the main verb of a sentence under both the original and counterfactual orders, where the latter is obtained from manipulating dependency trees (Ravfogel et al., 2019). The CCC requires the model to revert simple reordered sentences to the original SVO ordering, equivalent to identifying these elements in a sentence. | 2307.02477#18 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
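As a toy sketch (ours, not the paper's dependency-tree pipeline) of generating a counterfactual word order for this probing task:

```python
# Reorder a subject-verb-object triple into any of the six basic word orders.
ORDERS = ("svo", "sov", "vso", "vos", "osv", "ovs")

def reorder(subject, verb, obj, order="sov"):
    slots = {"s": subject, "v": verb, "o": obj}
    return " ".join(slots[c] for c in order)

print(reorder("the dog", "chased", "the ball", "sov"))  # 'the dog the ball chased'
```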
2307.02485 | 18 | Qualitative results To better understand the essential factors for effective cooperation, we conduct a qualitative analysis of the agents' behaviors exhibited in our experiments and identify several cooperative behaviors.
LLM Agents share progress and information with others. As shown in Figure 3abde, LLM agents communicate with each other to share progress and intents, demonstrating the Communication Module can handle the challenge of what to send, harnessing the free dialogue generation ability from the LLMs.
LLM Agents know when to request help and can respond to others' requests. In Figure 3d, Bob finds a target object in the living room but his container is already full, so he shares this information and requests Alice to come here to help. Alice responds by going there and grabbing the objects. Similarly in Figure 3b, Alice responds to Bob's requests and questions. These examples show LLMs know when to request help and can understand others' requests and responses.
LLM Agents can adapt plans considering others. In Figure 3a, Bob suggests a labor division of himself going to the kitchen while Alice checks the other rooms, but Alice suggests a better plan given her circumstances: she is already in the kitchen, which Bob was not aware of before; finally, Bob adapts his plan to cooperate with her. | 2307.02485#18 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 18 | Figure 5 reports the runtime of vanilla attention and our dilated attention. Both of them are implemented with the FlashAttention kernel for saving memory and improving speed. It shows that dilated attention can successfully scale up the sequence length with almost constant latency. By partitioning the sequence dimension, it can leverage the distributed systems to scale the sequence length to 1 billion tokens. In contrast, vanilla attention suffers from the quadratic dependency on the sequence length. Its latency dramatically increases as the length grows. Moreover, there is no distributed algorithm for vanilla attention to break the sequence length limitation. This proves the advantage of the linear complexity as well as the distributed algorithm for LONGNET.
# 4 Experiments on Language Modeling
# 4.1 Setup | 2307.02486#18 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experiments results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 18 | In the following sections, we will use IFS to evaluate vanilla models as well as response tone changes achieved by prompt engineering and an SFT process.
# 5 Results
# 5.1 Baseline
We used the IFS metric to evaluate several publicly available models. Since the dataset consists of less than 50% fragmented instructions (including false positives generated by the algorithm), we expected the base model to obtain an IFS below this level when prompted without additional affixes. Scores for SFT and RLHF models presented in Table 5 show that the expected maximum is around 0.8-0.9, whereas the most prominent difference between base and instruction-following LLMs is the relative difference between IFSpartial and IFSfull. | 2307.03692#18 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02053 | 19 | Table 3: 0-shot problem-solving evaluation of FLACUNA and other baseline models.
0-shot Problem-solving. We conducted a 0-shot performance evaluation of FLACUNA and compared it against both VICUNA and STABLEVICUNA. The results presented in Table 3 demonstrate a noteworthy performance leap by FLACUNA compared to its competitors. This improvement can be attributed to the training of FLACUNA on the high-quality FLAN instruction dataset.
HHH Evaluation. We conducted a further evaluation using BBH's HHH evaluation dataset (see Table 4), where FLACUNA exhibited an impressive 11% improvement over VICUNA. Notably, our instruction dataset collection aimed to enhance VICUNA's problem-solving abilities, but it also had a positive impact on its HHH performance. This observation aligns with the experience of FLAN-T5, which achieved a 24.2% performance improvement over its T5 backbone after fine-tuning on FLAN. | 2307.02053#19 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
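The FLACUNA record above describes fine-tuning VICUNA on an instruction mixture; its reference list includes LoRA (2106.09685), which suggests a parameter-efficient setup. A minimal sketch follows, assuming the Hugging Face `peft` library; the model id and every hyperparameter are illustrative assumptions, not FLACUNA's actual recipe.

```python
# Sketch of parameter-efficient instruction tuning of a Vicuna-style model.
# Model id and LoRA hyperparameters are assumptions for illustration only.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("lmsys/vicuna-13b-v1.3")  # assumed checkpoint
lora = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections in LLaMA blocks
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # only the low-rank adapters are trained
```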
2307.02477 | 19 | # 3.4 Natural Language Reasoning with First-Order Logic
We next consider a deductive reasoning task that is still based on natural language. Logical reasoning is a prerequisite ability for many complex tasks (McCarthy, 1959) and has been the focus of much recent work (Clark et al., 2020; Tafjord et al., 2021; Saparov and Mitchell, 2022; Saparov and He, 2023; i.a.). Nevertheless, LMs struggle with reasoning with premises that are inconsistent with common sense (Dasgupta et al., 2022; Yu et al., 2023; Tang et al., 2023). Here, we undertake a similar study from the perspective of counterfactual analysis to disentangle the effect of common sense from a model's actual logical reasoning capability. Following prior work, we evaluate in an entailment format and ask LMs if a series of premises entails a conclusion. We use the FOLIO dataset (Han et al., 2022), most of whose premises are consistent with common sense, and manually rewrite them to violate common sense. We study if LM performance is affected by the truthfulness of the premises under which they operate. The CCC directly asks the model if the original or post-rewrite premise is true, when presented both as options.
# 3.5 Spatial Reasoning | 2307.02477#19 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
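The entailment setup in the chunk above is easy to make concrete. A minimal sketch follows; the premises and prompt wording are illustrative inventions, not items from FOLIO.

```python
# Sketch of the default vs. counterfactual entailment probe: the same
# logical form is tested with common-sense-consistent and with rewritten,
# common-sense-violating premises.
def entailment_prompt(premises: list[str], conclusion: str) -> str:
    lines = ["Determine whether the conclusion follows from the premises."]
    lines += [f"Premise: {p}" for p in premises]
    lines += [f"Conclusion: {conclusion}", "Answer (entailed / not entailed):"]
    return "\n".join(lines)

default = entailment_prompt(
    ["All birds have feathers.", "A robin is a bird."], "A robin has feathers.")
counterfactual = entailment_prompt(
    ["All birds can read.", "A robin is a bird."], "A robin can read.")
# Both arguments share the same form (universal instantiation), so a model
# with an abstract deduction procedure should answer them identically.
print(default, counterfactual, sep="\n\n")
```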
2307.02485 | 19 | LLM Agents know when not to communicate. In Figure 3c, though Bob receives Alice's suggestion of sharing any progress and has just found a plate, it's more efficient for him to grab the objects by himself and get the job done since this is the last goal object. He successfully reasons about this
1 Our main experiments were done between 2023.5.1 and 2023.5.16.
and chooses not to communicate to achieve higher efficiency. We also observed this behavior from humans when conducting the same task. | 2307.02485#19 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
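The "know when not to communicate" behavior above boils down to a cost-benefit check before sending a message. A minimal sketch of such a communication step follows; the function names and record layout are hypothetical, not part of the paper's framework.

```python
# Sketch of a communication gate for a cooperative agent: share progress
# only when teammates still have work that could depend on it, as in the
# last-goal-object example above. All names here are hypothetical.
def maybe_communicate(progress: dict, remaining_goals: int, send) -> None:
    if remaining_goals == 0:
        return  # last goal: just finish the task instead of messaging
    if progress.get("new_findings"):
        send(f"Update: I found {', '.join(progress['new_findings'])}.")

maybe_communicate({"new_findings": ["a plate in the kitchen"]},
                  remaining_goals=2, send=print)   # sends an update
maybe_communicate({"new_findings": ["a plate"]},
                  remaining_goals=0, send=print)   # stays silent
```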
2307.02486 | 19 | # 4 Experiments on Language Modeling
# 4.1 Setup
We implement LONGNET on language modeling. The backbone architecture is MAGNETO [WMH+22] with XPOS [SDP+22] relative position encoding, except that we replace the standard attention with our dilated attention. We use the base-size configuration of MAGNETO, which has a hidden dimension of 768, 12 attention heads, and 12 decoder layers. We pre-train the model with The Stack dataset [KLA+22], a source code collection in over 300 programming languages. The data is preprocessed with the tiktoken tokenizer2 with cl100k_base encoding. The models are trained with a batch size of 0.5M tokens for 300K steps. More details regarding the hyperparameters can be found in the appendix. All experiments are conducted based on the torchscale [MWH+22] codebase.
# 4.2 Results | 2307.02486#19 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can serve as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
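The dilated-attention idea described in the LongNet chunks is straightforward to illustrate: within a segment of length w, only every r-th position is attended. The sketch below builds the index pattern only; shapes, names, and the (w, r) pairs are illustrative, not the full kernel.

```python
# Sketch of the sparsification pattern behind dilated attention: per
# segment of length w, keep every r-th position. Larger segments pair
# with larger dilations, so the cost per segment stays roughly constant.
import numpy as np

def dilated_indices(seq_len: int, w: int, r: int) -> list[np.ndarray]:
    """Return, per segment, the positions attended to at dilation r."""
    segments = []
    for start in range(0, seq_len, w):
        end = min(start + w, seq_len)
        segments.append(np.arange(start, end, r))
    return segments

for w, r in [(2048, 1), (4096, 2), (8192, 4)]:
    kept = sum(len(s) for s in dilated_indices(8192, w, r))
    print(f"w={w:5d} r={r}: {kept} attended positions out of 8192")
```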
2307.03692 | 19 |
| Model | IFS | IFSpartial | IFSfull |
|---|---|---|---|
| GPT-2 | 0.68 | 0.67 | 0.70 |
| RedPajama-3B | 0.33 | 0.17 | 0.49 |
| LLaMA-7B | 0.34 | 0.19 | 0.50 |
| LLaMA-13B | 0.81 | 0.79 | 0.82 |
| LLaMA-33B | 0.74 | 0.68 | 0.81 |
| davinci | 0.29 | 0.17 | 0.42 |
| Palmyra-x | 0.68 | 0.45 | 0.91 |
| Palmyra-base | 0.32 | 0.17 | 0.48 |
| Palmyra-large | 0.32 | 0.17 | 0.47 |
| text-davinci-003 | 0.62 | 0.37 | 0.88 |
| GPT-3.5-turbo | 0.90 | 0.83 | 0.97 |
| GPT-4 | 0.88 | 0.80 | 0.97 |
| Palmyra-instruct | 0.61 | 0.36 | 0.86 |
Table 5: Baseline: Instruction Following Score (IFS) for selected publicly available models.
# 5.2 Prompt engineering
A very simple method to encourage LMs to follow instructions is to add extra prompt suffixes or wrappers around instructions, which could disrupt the next-token prediction task and produce responses. Figure 3 presents three versions of prompts:
Instruction tuning prompts | 2307.03692#19 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well-formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and that
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
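The three prompt variants compared in the chunk above (A: full Alpaca wrapper, B: affixes only, C: bare instruction) can be written down directly. The sketch below reconstructs them from the figure caption; the exact whitespace is an assumption.

```python
# Sketch of the prompt variants from Figure 3 of the IFS paper:
# (A) the full Alpaca wrapper, (B) only the instruction/response affixes,
# (C) the bare instruction as the baseline.
ALPACA_HEADER = ("Below is an instruction that describes a task. "
                 "Write a response that appropriately completes the request.\n\n")

def prompt_a(instruction: str) -> str:
    return f"{ALPACA_HEADER}### Instruction:\n{instruction}\n\n### Response:\n"

def prompt_b(instruction: str) -> str:
    return f"### Instruction:\n{instruction}\n\n### Response:\n"

def prompt_c(instruction: str) -> str:
    return instruction  # no prompt: plain next-token prediction

for build in (prompt_a, prompt_b, prompt_c):
    print(repr(build("Name three primary colors.")), end="\n\n")
```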
2307.02046 | 20 |
token sequences, considering both the left and right context of each token. It is pre-trained on massive amounts of text data using tasks like masked language modeling and next-sentence prediction, thereby capturing the nuances of language and meaning in context. This process translates text into a vector space, facilitating nuanced and context-aware analyses. On the other hand, GPT, based on the transformer decoder architecture, uses a self-attention mechanism for one-directional word sequence processing from left to right. GPT is mainly adopted in language generation tasks, mapping embedding vectors back to text space and generating contextually relevant responses. Finally, T5, an encoder-decoder model, can handle any text-to-text task by converting every natural language processing problem into a text generation problem. For instance, it can re-frame a sentiment analysis task into a text sequence, like "sentiment: I love this movie.", which adds "sentiment:" before "I love this movie."; it then produces the answer "positive". By doing so, T5 uses the same model, objective, and training procedure for all tasks, making it a versatile tool for various NLP tasks. | 2307.02046#20 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
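The chunk above describes using pretrained language models as feature encoders for recommendation. A minimal sketch of that idea follows, assuming the Hugging Face `transformers` library; the model choice and mean pooling are illustrative design decisions, not the survey's prescription.

```python
# Sketch of "LLM as feature encoder" for recommendation: embed item-side
# text with a bidirectional encoder and rank items against a user profile.
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = AutoModel.from_pretrained("bert-base-uncased")

def embed(texts: list[str]) -> torch.Tensor:
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        hidden = enc(**batch).last_hidden_state
    return hidden.mean(dim=1)  # mean pooling over tokens

items = ["A space-opera adventure film.", "A documentary about cooking."]
user = embed(["I love science fiction movies."])
scores = torch.nn.functional.cosine_similarity(user, embed(items))
print(scores)  # higher score = better match for the user profile
```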
2307.02053 | 20 | Writing Evaluation. While FLACUNA primarily excels in problem-solving tasks, we made efforts to maintain the impressive writing and chatting ability of VICUNA. To achieve this, we incorporated conversational datasets generated by GPT-4, such as GPT-4-Alpaca and ShareGPT, into the FLANMINI collection. However, despite these efforts, we observed certain issues in FLACUNA's writing performance. In some cases, it generates code snippets in response to prompts that are unrelated to coding. We attribute this behavior to the significant data imbalance, where the conversational dataset constitutes only 8.2% of the entire data mixture. Prompt engineering techniques can help rectify such issues.
We discovered that FLACUNA generates responses of reasonable quality when provided with the following template: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."
| 2307.02053#20 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
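The conversation template quoted in the chunk above can be assembled programmatically. A small sketch follows; the USER/ASSISTANT role tags are an assumption based on common Vicuna-style formatting, not confirmed by the chunk itself.

```python
# Sketch of the quoted system prompt wrapped into a Vicuna-style turn
# format. The role tags are assumed, not taken from the paper.
SYSTEM = ("A chat between a curious user and an artificial intelligence "
          "assistant. The assistant gives helpful, detailed, and polite "
          "answers to the user's questions.")

def vicuna_prompt(user_message: str) -> str:
    return f"{SYSTEM} USER: {user_message} ASSISTANT:"

print(vicuna_prompt("Summarize the plot of Hamlet in two sentences."))
```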
2307.02477 | 20 | # 3.5 Spatial Reasoning
A major debate around LMs is whether grounded representations of meaning can be learned from form alone (Bender and Koller, 2020; Piantadosi and Hill, 2022; Mollo and Millière, 2023). Studies have shown that LMs can learn meaningful world representations through text-only training (Abdou et al., 2021; Li et al., 2023c; Jin and Rinard, 2023). In particular, Patel and Pavlick (2022) find that LMs learn representations of cardinal directions that can be aligned to grounded conceptual spaces with few-shot demonstrations.
We similarly investigate an understanding of car- dinal directions, but instead of evaluating whether a model can induce structured conceptual spaces, we ask if it can apply conceptual spaces to reason about the locations of objects. Specifically, we ask an LM for the coordinates of objects whose positions are described using cardinal directions, under a conventional 2D coordinate system (e.g., where east corresponds to (1, 0)) versus coordi- nate systems with swapped, rotated, and randomly permuted axes. We expect a robust representation to not be sensitive to such transformations. The CCC involves asking the model to directly output the counterfactual cardinal directions.
# 3.6 Drawing | 2307.02477#20 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
[Figure: five panels of cooperative behaviors — a. adapt plans, b. respond to requests, c. know when not to communicate, d. know when to request, e. share information. The in-panel agent dialogue is not cleanly recoverable from the extraction.] | 2307.02485#20 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2307.02486 | 20 | # 4.2 Results
We compare LONGNET with both vanilla Transformer and sparse Transformers. The differ- ences among the architectures are the attention layers, while the others remain the same. We scale the sequence length of these models from 2K to 32K, while reducing the batch size to keep the number of tokens per batch constant. For LONGNET, we use segment lengths of w = {2048, 4096, 8192, 16384, 32768}, and the dilated ratios are r = {1, 2, 4, 6, 12}. We im- plement the ï¬xed pattern for sparse attention as in [CGRS19] with multiple heads attending to distinct subblocks. The block size is set to 2048. We adjust their sparse ratios to match the computation ï¬ops with LONGNET so that the comparison is fair. The attention layers in vanilla Transformers are dense and fully connected, so the computation cost is much higher. Due to the computation constraints, we only scale it up to 32K sequence length. All of our implementations of attention variants are based on FlashAttention3 for training efï¬ciency. We customize the ï¬ash attention kernels for both sparse attention and dilated attention. | 2307.02486#20 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can serve as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 20 | Instruction tuning prompts
A Alpaca prompt B. Our prompt .No prompt Below isan instruction that {Instruction} {instruction} dserbes a task, White a response that appropriately completes the uy âHt Response: {response} â#8 Inetruction: Ce) {instruction} âHt Response: {response}
Figure 3: Comparative illustration of instruction tuning prompts. A. Alpaca prompt, a wrapper around instruction, B. only Alpaca sufï¬x, C. no prompt, the baseline
Stage 1: Classifier training & LM prediction training instruction IFS Stage 2: Evaluation binary inference classifier LM response
Figure 2: IFS training and evaluation pipeline
The results presented in Table 6 show that vari- ants of both prompts are equally effective. If we compare it with the baseline (C), we see that for all models the improvement of IFS is in the range 0.5â0.6. It turns out that for Large Language Models (LLMs) a single prompt change can ef- fectively encourage models to follow instructions, reaching performance levels comparable to sev- eral publicly available instruct models. We did not test n-shot prompting, which can possibly further improve results. | 2307.03692#20 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criteria for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
2307.02046 | 21 | Due to the increasing scale of models, LLMs have revolutionized the field of NLP by demonstrating unprecedented capabilities in understanding and generating human-like textual knowledge [18], [44]. These models (e.g., GPT-3 [15], LaMDA [45], PaLM [46], and Vicuna [47]), often based on transformer architectures, undergo training on extensive volumes of text data. This process enables them to capture complex patterns and nuances in human language. Recently, LLMs have demonstrated remarkable capabilities of ICL, a concept that is central to their design and functionality. ICL refers to the model's capacity to comprehend and provide answers based on the input context, as opposed to merely relying on internal knowledge obtained through pre-training. Several works have explored the utilization of ICL in various tasks, such as SG-ICL [48] and EPR [49]. These works show that ICL allows LLMs to adapt their responses based on input context instead of generating generic responses. Another technique that can enhance the reasoning
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 21 | 5
| Model | Size | Harmlessness | Helpfulness | Honesty | Other | Avg. | Δ Avg. |
|---|---|---|---|---|---|---|---|
| ChatGPT | - | 90.7 | 91.2 | 78.1 | 86.3 | 86.6 | - |
| Flan-Alpaca | 11B | 74.2 | 81.4 | 77.4 | 83.4 | 79.1 | +26.6 |
| Flan-T5 | 11B | 75.9 | 75.3 | 75.1 | 79.6 | 76.7 | +24.2 |
| Tk-Instruct | 11B | 70.1 | 54.8 | 62.3 | 76.0 | 65.8 | +13.3 |
| T5 | 11B | 46.4 | 54.8 | 58.1 | 50.7 | 52.5 | - |
| Alpaca | 13B | 49.7 | 51.2 | 51.8 | 45.5 | 49.5 | -12.3 |
| LLaMA | 13B | 57.2 | 61.0 | 57.0 | 72.0 | 61.8 | - |
| Dolly V2 | 12B | 51.7 | 59.9 | 47.0 | 58.1 | 54.2 | +9.1 |
| Pythia | 12B | 41.3 | 46.1 | 43.6 | 49.3 | 45.1 | - |
| STABLEVICUNA | 13B | 61.7 | 67.2 | 57.1 | 79.1 | 66.3 | +4.5 |
| VICUNA | 13B | 62.0 | 66.1 | 52.4 | 74.4 | 63.7 | +1.9 |
| FLACUNA | 13B | 72.4 | 71.2 | 70.5 | 83.7 | 74.5 | +12.6 |
| 2307.02053#21 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 21 | # 3.6 Drawing
Despite being trained on only textual data, LMs have been shown to be able to structure their representations of perceptual concepts such as size and color (Abdou et al., 2021; Patel and Pavlick, 2022; Zhang et al., 2020; Ilharco et al., 2021; i.a.) in a way that credibly mirrors the physical world. Recent LMs can even generate plausible drawings of objects using code such as TikZ and SVG (Bubeck et al., 2023; Zhang et al., 2023c). We evaluate the visual understanding of LMs by asking them to generate code for drawing various objects in the Processing language. Psychological studies have shown that humans have the ability to rotate mental representations of objects (Shepard and Metzler, 1971; Vandenberg and Kuse, 1978). For the counterfactual settings, we similarly ask the LM to generate code that draws the same object, but rotated or vertically flipped. We disallow the use of functions such as rotate to prevent shortcut solutions (see §7.2 for further discussion). As with the spatial reasoning task (§3.5), an ideal model should be robust to these settings. For the CCC, we ask the model to draw a straight line at the top of the canvas in addition to the object; a flipped/rotated line thus signifies an understanding of the transformations.
# 3.7 Music | 2307.02477#21 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
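With rotate()-style shortcuts disallowed, the drawing transformations above require remapping every vertex by hand. The sketch below shows that arithmetic in Python rather than Processing; the shape is an invented example.

```python
# Sketch of the vertex remapping the drawing task demands: vertical flip
# and 180-degree rotation on a canvas, computed coordinate by coordinate.
def flip_vertical(points, height):
    return [(x, height - y) for x, y in points]

def rotate_180(points, width, height):
    return [(width - x, height - y) for x, y in points]

house = [(40, 90), (40, 50), (60, 30), (80, 50), (80, 90)]  # roof apex at y=30
print(flip_vertical(house, height=100))   # apex moves to y=70
print(rotate_180(house, 100, 100))
```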
2307.02486 | 21 | Table 2 summarizes the results of these models on the Stack dataset. We use perplexity as the evaluation metric. The models are tested with different sequence lengths, ranging from 2K to 32K. When the input is longer than the maximum length that the models support, we implement block- wise causal attention (BCA) [SDP 22], a state-of-the-art extrapolation method for language model inference. Besides, we remove the absolute position encoding. Primarily, the results demonstrate that increasing the sequence length during training generally leads to a better language model. Secondly, the extrapolation of sequence length in inference does not apply to the case when the length is much larger than the model supports. Finally, LONGNET consistently outperforms the baseline models, proving its effectiveness in language modeling.
# 4.3 Scaling Curves of Sequence Length
Previous work [KMH+20] has shown that language models follow some scaling laws by increasing parameters or training tokens. We are interested in the performance of language models when
2 https://github.com/openai/tiktoken
3 https://github.com/HazyResearch/flash-attention/tree/main
| 2307.02486#21 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can serve as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
2307.03692 | 21 | We used the gpt4all v1.3-groovy dataset introduced in Anand et al. 2023 as the instruct dataset. We set the character limit to 2k (similar to the LLaMA models' pretraining objectives, which used a 512-token length). Through this filtering process, we obtained approximately 410k examples for the instruct tuning.
Models were trained with the modified Alpaca prompt:
| Model (prompt) | IFS | IFSpartial | IFSfull |
|---|---|---|---|
| LLaMA-7B (A) | 0.74 | 0.71 | 0.77 |
| LLaMA-7B (B) | 0.75 | 0.73 | 0.78 |
| LLaMA-7B (C) | 0.34 | 0.19 | 0.50 |
| LLaMA-13B (A) | 0.81 | 0.74 | 0.88 |
| LLaMA-13B (B) | 0.81 | 0.79 | 0.82 |
| LLaMA-13B (C) | 0.31 | 0.18 | 0.43 |
| LLaMA-33B (A) | 0.87 | 0.85 | 0.89 |
| LLaMA-33B (B) | 0.74 | 0.68 | 0.81 |
| LLaMA-33B (C) | 0.33 | 0.18 | 0.47 |
Table 6: Instruction Following Score (IFS) for models with and without prompt suffixes.
# 5.3 Supervised finetuning | 2307.03692#21 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well-formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and that
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
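The character-limit filtering step described in the chunk above is a one-pass pass/fail check. A minimal sketch follows; the {"prompt", "response"} field names are an assumed record layout, not the dataset's actual schema.

```python
# Sketch of the 2k-character dataset filter: keep only examples whose
# combined prompt and response fit the character budget.
def filter_examples(examples, char_limit=2000):
    return [ex for ex in examples
            if len(ex["prompt"]) + len(ex["response"]) <= char_limit]

data = [{"prompt": "Explain rainbows.", "response": "Light refracts..."},
        {"prompt": "Hi", "response": "x" * 5000}]
print(len(filter_examples(data)))  # 1: the oversized example is dropped
```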
2307.02046 | 22 | abilities of LLMs is chain-of-thought (CoT). This method involves supplying multiple demonstrations to describe the chain of thought as examples within the prompt, guiding the modelâs reasoning process [50]. An extension of the CoT is the concept of self- consistency, which operates by implementing a majority voting mechanism on answers [51]. Current researches continue to delve into the application of CoT in LLMs, such as STaR [52], THOR [53], and Tab-CoT [54]. By offering a set of prompts to direct the modelâs thought process, CoT enables the model to reason more effectively and deliver more accurate responses. | 2307.02046#22 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
2307.02053 | 22 | Table 4: Evaluation results for alignment to human values on the honesty, helpfulness, and harmlessness (HHH) benchmark. Avg. denotes the average performance, while Δ Avg. denotes the average improvement compared to the corresponding foundation model.
| Model | Size | Informative Rel. | Informative Coh. | Professional Rel. | Professional Coh. | Argumentative Rel. | Argumentative Coh. | Creative Rel. | Creative Coh. | Avg. |
|---|---|---|---|---|---|---|---|---|---|---|
| ChatGPT | - | 3.34 | 3.98 | 3.88 | 3.96 | 3.96 | 3.82 | 3.92 | 3.94 | 3.78 |
| Flan-Alpaca | 11B | 3.56 | 3.46 | 3.54 | 3.70 | 3.22 | 3.28 | 3.70 | 3.40 | 3.51 |
| Flan-T5 | 11B | 2.64 | 3.24 | 2.62 | 3.22 | 2.54 | 3.40 | 2.50 | 2.72 | 2.58 |
| Dolly-V2 | 12B | 3.54 | 3.64 | 2.96 | 3.74 | 3.66 | 3.20 | 3.02 | 3.18 | 3.30 |
| StableVicuna | 13B | 3.54 | 3.64 | 2.96 | 3.74 | 3.30 | 3.20 | 3.02 | 3.18 | 3.21 |
| Vicuna | 13B | 3.60 | 3.96 | 3.74 | 3.82 | 3.82 | 3.56 | 3.82 | 3.92 | 3.75 |
| Flacuna | 13B | 3.02 | 3.42 | 3.48 | 3.52 | 3.38 | 3.02 | 3.92 | 3.80 | 3.45 |
| 2307.02053#22 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
2307.02477 | 22 | # 3.7 Music
Recent work has shown the potential of large-scale models for music infilling (Huang et al., 2019a,b) and generation (Agostinelli et al., 2023; Copet et al., 2023; Ren et al., 2020). Bubeck et al. (2023) show that even a text-only LM with no music-specific pretraining exhibits some musical abilities, including understanding musical structure and manipulating melodies. We investigate the extent of LMs' musical abilities through two tasks.
In the chord placement task, we evaluate whether LMs can provide the correct chord fret placements for string instruments with standard or altered string tunings. The altered tunings, known as scordatura, are typical in music and are used to evoke a specific sound or effect (e.g., enabling heavier, deeper sound in metal music). We evaluate LMs using an existing database4 that includes chords for guitar and ukulele. In the counterfactual setting, we instruct LMs to provide fret placements for a special guitar/ukulele where one or two of the strings are altered. For guitar, we include drop-D tuning, a popular alternative guitar tuning that allows us to investigate whether the frequency of counterfactual tunings affects results (see §5.1). To check whether the model has understood the tunings, we ask for | 2307.02477#22 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
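The fret-placement task above rests on simple pitch arithmetic: the fret for a note is its distance in semitones from the open string, so retuning a string shifts every placement on it. A minimal sketch, using MIDI note numbers; the pitch choices are illustrative.

```python
# Sketch of fret arithmetic under standard vs. altered tuning.
# Open-string pitches are MIDI note numbers for standard guitar tuning.
STANDARD_GUITAR = {"E2": 40, "A2": 45, "D3": 50, "G3": 55, "B3": 59, "E4": 64}

def fret_for(target_midi: int, open_string_midi: int) -> int:
    fret = target_midi - open_string_midi
    if not 0 <= fret <= 24:
        raise ValueError("pitch not reachable on this string")
    return fret

g_note = 43  # G2
print(fret_for(g_note, STANDARD_GUITAR["E2"]))      # 3rd fret, standard tuning
print(fret_for(g_note, STANDARD_GUITAR["E2"] - 2))  # 5th fret after drop-D retuning
```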
2307.02485 | 22 | Figure 3: Example cooperative behaviors demonstrating our agents built with LLMs can communicate effectively and are good cooperators.
# 4.2.2 Collaborating with Humans
Humans are the most common, if not the most important, embodied agents for our agents to cooperate with. Therefore it's important to study whether our proposed LLM agents can cooperate well with humans. We conducted human experiments on the Communicative Watch-And-Help task where the agent Alice is controlled by real humans. | 2307.02485#22 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
2https://github.com/openai/tiktoken
3https://github.com/HazyResearch/flash-attention/tree/main
Model                         Length  Batch  Github 2K  Github 8K  Github 32K
Transformer [VSP+17]          2K      256    4.24       5.07       11.29
Sparse Transformer [CGRS19]   8K      64     4.39       3.35       8.79
LONGNET (ours)                8K      64     4.23       3.24       3.36
Sparse Transformer [CGRS19]   16K     32     4.85       3.73       19.77
LONGNET (ours)                16K     32     4.27       3.26       3.31
Sparse Transformer [CGRS19]   32K     16     5.15       4.00       3.64
LONGNET (ours)                32K     16     4.37       3.33       3.01
Table 2: Perplexity of language models for LONGNET and the baselines.
[Figure 6 plot: test perplexity (y-axis) vs. FLOPs (x-axis), comparing Transformer and LongNet across sequence lengths]
Figure 6: Test perplexity of LONGNET and dense Transformers using different sequence lengths during training. LONGNET outperforms dense Transformers with a lower perplexity and a significantly smaller amount of computation. | 2307.02486#22 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
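For reference, the perplexities reported in Table 2 are the exponentiated per-token cross-entropy. A minimal sketch of the conversion (the function name is ours):

import math

def perplexity(total_nll, n_tokens):
    """Perplexity = exp(average negative log-likelihood per token, in nats)."""
    return math.exp(total_nll / n_tokens)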
Table 6: Instruction Following Score (IFS) for models with and without prompt suffixes.
# 5.3 Supervised finetuning
In this study, we opted for 7B and 13B LLaMA models as the base LLMs for SFT. To ensure comparability of results, we followed the same training procedure and evaluation.
PROMPT_DICT = {
    # Alpaca-style templates with the prefix prompt removed; exact whitespace
    # is approximated from the extracted text.
    "prompt_input": ("{instruction}\n{input}### Response:"),
    "prompt_no_input": ("{instruction}### Response:"),
}
The modification integrates the instruction and the optional input while eliminating the prefix prompt. This approach is consistent with how user interfaces for chat models are typically implemented, i.e., as a single dialog input box. We could use the full Alpaca wrapper, but since both prompting techniques lead to similar scores, we chose the shorter one for efficiency reasons. | 2307.03692#22 | Becoming self-instruct: introducing early stopping criteria for minimal instruct tuning | In this paper, we introduce the Instruction Following Score (IFS), a metric
that detects language models' ability to follow instructions. The metric has a
dual purpose. First, IFS can be used to distinguish between base and instruct
models. We benchmark publicly available base and instruct models, and show that
the ratio of well formatted responses to partial and full sentences can be an
effective measure between those two model classes. Secondly, the metric can be
used as an early stopping criterion for instruct tuning. We compute IFS for
Supervised Fine-Tuning (SFT) of 7B and 13B LLaMA models, showing that models
learn to follow instructions relatively early in the training process, and the
further finetuning can result in changes in the underlying base model
semantics. As an example of semantics change we show the objectivity of model
predictions, as defined by an auxiliary metric ObjecQA. We show that in this
particular case, semantic changes are the steepest when the IFS tends to
plateau. We hope that decomposing instruct tuning into IFS and semantic factors
starts a new trend in better controllable instruct tuning and opens
possibilities for designing minimal instruct interfaces querying foundation
models. | http://arxiv.org/pdf/2307.03692 | Waseem AlShikh, Manhal Daaboul, Kirk Goddard, Brock Imel, Kiran Kamble, Parikshith Kulkarni, Melisa Russak | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2101.00027"
}
] |
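To make the template in §5.3 concrete, here is a short usage sketch. It assumes the PROMPT_DICT defined above, and the instruction/input strings are made up for illustration.

example = {"instruction": "Summarize the text.", "input": "IFS measures instruction following."}

# Pick the template based on whether the optional input is present.
template = PROMPT_DICT["prompt_input"] if example["input"] else PROMPT_DICT["prompt_no_input"]
prompt = template.format_map(example)
# -> "Summarize the text.\nIFS measures instruction following.### Response:"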
2307.02046 | 23 | With the powerful abilities mentioned above, LLMs have shown remarkable potential in various fields, such as chemistry [17], education [55], and finance [56]. These models, such as ChatGPT, have also been instrumental in enhancing the functionality and user experience of RecSys. One of the key applications of LLMs in RecSys is the prediction of user ratings for items. This is achieved by analyzing historical user interactions and preferences, which in turn enhances the accuracy of the recommendations [57], [58]. LLMs have also been employed in sequential recommendations, which analyze the sequence of user interactions to predict their next preference, such as TALLRec [59], M6-Rec [60], PALR [61], | 2307.02046#23 | Recommender Systems in the Era of Large Language Models (LLMs) | With the prosperity of e-commerce and web applications, Recommender Systems
(RecSys) have become an important component of our daily life, providing
personalized suggestions that cater to user preferences. While Deep Neural
Networks (DNNs) have made significant advancements in enhancing recommender
systems by modeling user-item interactions and incorporating textual side
information, DNN-based methods still face limitations, such as difficulties in
understanding users' interests and capturing textual side information,
inabilities in generalizing to various recommendation scenarios and reasoning
on their predictions, etc. Meanwhile, the emergence of Large Language Models
(LLMs), such as ChatGPT and GPT4, has revolutionized the fields of Natural
Language Processing (NLP) and Artificial Intelligence (AI), due to their
remarkable abilities in fundamental responsibilities of language understanding
and generation, as well as impressive generalization and reasoning
capabilities. As a result, recent studies have attempted to harness the power
of LLMs to enhance recommender systems. Given the rapid evolution of this
research direction in recommender systems, there is a pressing need for a
systematic overview that summarizes existing LLM-empowered recommender systems,
to provide researchers in relevant fields with an in-depth understanding.
Therefore, in this paper, we conduct a comprehensive review of LLM-empowered
recommender systems from various aspects including Pre-training, Fine-tuning,
and Prompting. More specifically, we first introduce representative methods to
harness the power of LLMs (as a feature encoder) for learning representations
of users and items. Then, we review recent techniques of LLMs for enhancing
recommender systems from three paradigms, namely pre-training, fine-tuning, and
prompting. Finally, we comprehensively discuss future directions in this
emerging field. | http://arxiv.org/pdf/2307.02046 | Wenqi Fan, Zihuai Zhao, Jiatong Li, Yunqing Liu, Xiaowei Mei, Yiqi Wang, Zhen Wen, Fei Wang, Xiangyu Zhao, Jiliang Tang, Qing Li | cs.IR, cs.AI, cs.CL | 16 pages, 5 figures | null | cs.IR | 20230705 | 20230805 | [
{
"id": "2201.11903"
},
{
"id": "2305.05973"
},
{
"id": "2010.15980"
},
{
"id": "2307.09688"
},
{
"id": "2307.07171"
},
{
"id": "2305.15498"
},
{
"id": "2305.02182"
},
{
"id": "2305.12090"
},
{
"id": "2305.07609"
},
{
"id": "2304.03516"
},
{
"id": "2303.14524"
},
{
"id": "2305.15673"
},
{
"id": "2301.00234"
},
{
"id": "2305.13112"
},
{
"id": "2307.10747"
},
{
"id": "2302.02591"
},
{
"id": "2305.15062"
},
{
"id": "2307.15780"
},
{
"id": "2303.13835"
},
{
"id": "2307.05722"
},
{
"id": "2305.07001"
},
{
"id": "2303.17564"
},
{
"id": "2305.11700"
},
{
"id": "2304.03879"
},
{
"id": "2206.08082"
},
{
"id": "2305.05065"
},
{
"id": "2305.00447"
},
{
"id": "2302.05729"
},
{
"id": "2304.10149"
},
{
"id": "2304.01097"
},
{
"id": "2306.05817"
},
{
"id": "2304.03153"
},
{
"id": "2304.04218"
},
{
"id": "2301.11489"
},
{
"id": "2305.06569"
},
{
"id": "2206.06190"
},
{
"id": "2307.02157"
},
{
"id": "2305.19860"
},
{
"id": "2305.15756"
},
{
"id": "2305.07633"
},
{
"id": "2305.16582"
},
{
"id": "2305.08845"
},
{
"id": "2307.03393"
},
{
"id": "2304.11116"
},
{
"id": "2306.06031"
},
{
"id": "2303.18223"
},
{
"id": "2305.15036"
},
{
"id": "2305.17812"
},
{
"id": "2010.01494"
},
{
"id": "2205.09666"
},
{
"id": "2205.08084"
},
{
"id": "2106.09685"
},
{
"id": "2106.00573"
},
{
"id": "2305.11255"
},
{
"id": "1810.04805"
},
{
"id": "2204.02311"
},
{
"id": "2305.06566"
},
{
"id": "2306.17256"
},
{
"id": "2305.06212"
},
{
"id": "2306.02552"
},
{
"id": "2305.07961"
},
{
"id": "2203.11171"
},
{
"id": "2301.12867"
},
{
"id": "2305.04518"
},
{
"id": "2305.14552"
},
{
"id": "2112.08633"
},
{
"id": "2307.14225"
},
{
"id": "1511.06939"
},
{
"id": "2012.15723"
},
{
"id": "2303.08896"
},
{
"id": "2306.06615"
},
{
"id": "2305.15075"
},
{
"id": "2305.09858"
},
{
"id": "2209.10117"
},
{
"id": "2305.06474"
},
{
"id": "2201.08239"
},
{
"id": "2302.03735"
},
{
"id": "2109.01652"
},
{
"id": "2305.07622"
},
{
"id": "2306.10933"
}
] |
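As a concrete illustration of the rating-prediction paradigm surveyed above, here is a minimal zero-shot prompting sketch. The prompt wording, example items, and 1-5 scale are illustrative assumptions, not the setup of any particular cited method.

def build_rating_prompt(user_history, candidate_item):
    """Format a user's rating history into a zero-shot rating-prediction prompt."""
    history = "\n".join(f"- {item}: {rating}/5" for item, rating in user_history)
    return (
        "A user rated the following items:\n"
        f"{history}\n"
        f'On a scale of 1 to 5, how would this user rate "{candidate_item}"? '
        "Answer with a single number."
    )

history = [("The Matrix", 5), ("Inception", 4), ("Titanic", 2)]
prompt = build_rating_prompt(history, "Blade Runner")
# `prompt` would then be sent to an LLM of choice; the API call is omitted here.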
[Table 5 model rows: ChatGPT, Flan-Alpaca, Flan-T5, Dolly-V2, STABLEVICUNA, VICUNA, FLACUNA]
Table 5: Evaluation results for writing-based tasks.
ASSISTANT: ". This finding suggests that with the appropriate prompts, we can improve FLACUNA's chatting performance.
However, upon careful examination of the generated samples, it becomes apparent that FLACUNA does not outperform VICUNA as a writing assistant. This observation is reinforced by the evaluation of the generated responses to the prompts in the IMPACT dataset using ChatGPT, as depicted in Table 5. ChatGPT consistently ranks VICUNA's responses significantly higher than those of FLACUNA.
The subpar performance of FLACUNA in writing-based scenarios can be attributed to several factors. Firstly, the disproportionate scarcity of conversational datasets in FLAN may have contributed to this outcome. Additionally, parameter-efficient tuning methods such as LORA may limit the effectiveness of the model in learning both problem-solving and general writing abilities. Hence, we may explore other efficient training methods for LLMs in the future [Lv et al., 2023].
An example of the prompt and FLACUNA's response is shown below. | 2307.02053#23 | Flacuna: Unleashing the Problem Solving Power of Vicuna using FLAN Fine-Tuning | Recently, the release of INSTRUCTEVAL has provided valuable insights into the
performance of large language models (LLMs) that utilize encoder-decoder or
decoder-only architecture. Interestingly, despite being introduced four years
ago, T5-based LLMs, such as FLAN-T5, continue to outperform the latest
decoder-based LLMs, such as LLAMA and VICUNA, on tasks that require general
problem-solving skills. This performance discrepancy can be attributed to three
key factors: (1) Pre-training data, (2) Backbone architecture, and (3)
Instruction dataset. In this technical report, our main focus is on
investigating the impact of the third factor by leveraging VICUNA, a large
language model based on LLAMA, which has undergone fine-tuning on ChatGPT
conversations. To achieve this objective, we fine-tuned VICUNA using a
customized instruction dataset collection called FLANMINI. This collection
includes a subset of the large-scale instruction dataset known as FLAN, as well
as various code-related datasets and conversational datasets derived from
ChatGPT/GPT-4. This dataset comprises a large number of tasks that demand
problem-solving skills. Our experimental findings strongly indicate that the
enhanced problem-solving abilities of our model, FLACUNA, are obtained through
fine-tuning VICUNA on the FLAN dataset, leading to significant improvements
across numerous benchmark datasets in INSTRUCTEVAL. FLACUNA is publicly
available at https://huggingface.co/declare-lab/flacuna-13b-v1.0. | http://arxiv.org/pdf/2307.02053 | Deepanway Ghosal, Yew Ken Chia, Navonil Majumder, Soujanya Poria | cs.CL | null | null | cs.CL | 20230705 | 20230705 | [
{
"id": "2301.13688"
},
{
"id": "2106.09685"
},
{
"id": "2203.07814"
},
{
"id": "1909.09436"
}
] |
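For context on the parameter-efficient tuning mentioned above, here is a minimal LoRA setup sketch using the Hugging Face peft library. The checkpoint path and hyperparameter values are illustrative assumptions, not FLACUNA's actual configuration.

from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("path/to/vicuna-base")  # placeholder path
config = LoraConfig(
    r=16,                                 # adapter rank (illustrative)
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],  # adapt only the attention projections
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # only the small adapter matrices are trainable

Because only the low-rank adapter matrices receive gradients while the base weights stay frozen, such a setup can constrain how much general writing ability the model picks up during fine-tuning.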
2307.02477 | 23 | 4https://github.com/tombatossals/chords-db
the first three notes on each string (including the open string) as the CCC.
In the note retrieval task, we evaluate whether LMs can retrieve notes from famous melodies (e.g., "Twinkle Twinkle Little Star"). The process of rewriting melodies in different keys, referred to as "transposition," is common in music (e.g., to accommodate the ranges of different singers or instruments). We evaluate LMs' musical abilities under transpositions by prompting them to retrieve the n-th note in a melody in either its canonical key (default setting) or a different key (counterfactual setting). We ask the LMs to retrieve the n-th note of the scale of the given key as the CCC. | 2307.02477#23 | Reasoning or Reciting? Exploring the Capabilities and Limitations of Language Models Through Counterfactual Tasks | The impressive performance of recent language models across a wide range of
tasks suggests that they possess a degree of abstract reasoning skills. Are
these skills general and transferable, or specialized to specific tasks seen
during pretraining? To disentangle these effects, we propose an evaluation
framework based on "counterfactual" task variants that deviate from the default
assumptions underlying standard tasks. Across a suite of 11 tasks, we observe
nontrivial performance on the counterfactual variants, but nevertheless find
that performance substantially and consistently degrades compared to the
default conditions. This suggests that while current LMs may possess abstract
task-solving skills to a degree, they often also rely on narrow,
non-transferable procedures for task-solving. These results motivate a more
careful interpretation of language model performance that teases apart these
aspects of behavior. | http://arxiv.org/pdf/2307.02477 | Zhaofeng Wu, Linlu Qiu, Alexis Ross, Ekin Akyürek, Boyuan Chen, Bailin Wang, Najoung Kim, Jacob Andreas, Yoon Kim | cs.CL, cs.AI | null | null | cs.CL | 20230705 | 20230801 | [] |
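The transposition underlying the counterfactual setting is a fixed semitone shift applied to every note. A minimal sketch follows; the melody fragment and keys are illustrative, and note spelling is simplified to sharps.

NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]

def transpose(melody, from_key, to_key):
    """Shift every note by the semitone distance between the two keys."""
    shift = (NOTES.index(to_key) - NOTES.index(from_key)) % 12
    return [NOTES[(NOTES.index(note) + shift) % 12] for note in melody]

twinkle_in_c = ["C", "C", "G", "G", "A", "A", "G"]  # opening of the melody in C major
twinkle_in_d = transpose(twinkle_in_c, "C", "D")    # counterfactual key: D major
print(twinkle_in_d[4])                              # 5th note: "B" (vs. "A" in the default key)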
We recruited 8 human subjects to perform the experiments under four scenarios: cooperating with the HP Agent2, the LLM Agent, the LLM Agent w/o communication, and doing the task alone. Subjects have access to the same observation and action space as the agents; they can click on visible objects and select actions to interact with them, including navigating to each room and communicating through a chat box (except in the w/o communication scenario). We gave each subject a tutorial and the chance to get familiar with the interface in a few pilot trials. We evaluate the same 10 tasks as in previous experiments, and each task was performed by at least 2 subjects, making 80 trials in total. We made sure each subject did 10 trials, with at least two trials under each scenario. After each trial that included a baseline agent to cooperate with, we asked subjects to rate the agent they had just cooperated with on a 7-point Likert scale based on three criteria adapted from [35]: (i) How effective do you think your communication with the other agent Bob was? Did it understand your message and/or share useful information with you? (ii) How helpful did you find the other agent Bob? Did it help you achieve the goal faster? (iii) How much do you trust the other agent Bob? Would you feel safe doing the task with it, or would you rather do the task alone? | 2307.02485#23 | Building Cooperative Embodied Agents Modularly with Large Language Models | Large Language Models (LLMs) have demonstrated impressive planning abilities
in single-agent embodied tasks across various domains. However, their capacity
for planning and communication in multi-agent cooperation remains unclear, even
though these are crucial skills for intelligent embodied agents. In this paper,
we present a novel framework that utilizes LLMs for multi-agent cooperation and
tests it in various embodied environments. Our framework enables embodied
agents to plan, communicate, and cooperate with other embodied agents or humans
to accomplish long-horizon tasks efficiently. We demonstrate that recent LLMs,
such as GPT-4, can surpass strong planning-based methods and exhibit emergent
effective communication using our framework without requiring fine-tuning or
few-shot prompting. We also discover that LLM-based agents that communicate in
natural language can earn more trust and cooperate more effectively with
humans. Our research underscores the potential of LLMs for embodied AI and lays
the foundation for future research in multi-agent cooperation. Videos can be
found on the project website https://vis-www.cs.umass.edu/Co-LLM-Agents/. | http://arxiv.org/pdf/2307.02485 | Hongxin Zhang, Weihua Du, Jiaming Shan, Qinhong Zhou, Yilun Du, Joshua B. Tenenbaum, Tianmin Shu, Chuang Gan | cs.AI, cs.CL, cs.CV | Project page: https://vis-www.cs.umass.edu/Co-LLM-Agents/ | null | cs.AI | 20230705 | 20230705 | [
{
"id": "2211.09935"
},
{
"id": "1712.05474"
},
{
"id": "2007.04954"
},
{
"id": "2210.04964"
},
{
"id": "1909.07528"
},
{
"id": "1903.00784"
},
{
"id": "1711.11017"
},
{
"id": "2201.11903"
},
{
"id": "2305.02412"
},
{
"id": "2212.08681"
},
{
"id": "2110.01517"
},
{
"id": "1809.00786"
},
{
"id": "1809.07124"
},
{
"id": "2303.03378"
},
{
"id": "2210.06849"
},
{
"id": "2305.05252"
},
{
"id": "2302.14045"
},
{
"id": "1810.00147"
},
{
"id": "2011.01975"
},
{
"id": "2209.07753"
},
{
"id": "2303.04129"
},
{
"id": "2301.05223"
},
{
"id": "2205.11916"
},
{
"id": "2206.08916"
},
{
"id": "2304.03442"
},
{
"id": "2204.01691"
},
{
"id": "2207.05608"
},
{
"id": "2212.04088"
}
] |
the context length is scaled up during training. We test the losses with inputs of a mixture of different lengths, from 1K to 32K. We use blockwise causal attention during inference to improve generalization across sequence lengths.
Figure 6 plots the scaling curves of sequence length for both vanilla Transformers and LONGNET. We estimate the amount of compute by calculating the total FLOPs of the matrix multiplications. The results show that both vanilla Transformers and LONGNET benefit from a larger context length during training. However, LONGNET can scale up the context length more efficiently, achieving a lower test loss with a smaller amount of compute. This demonstrates the advantage of longer training input over extrapolation. In conclusion, our experiments show that LONGNET is a more efficient way to scale up the context length in language models. This is because LONGNET can learn longer-range dependencies more effectively.
# 4.4 Scaling up Model Size
An important property of large language models is that the loss scales as a power law with compute. To verify whether LONGNET still follows a similar scaling law, we train a series of models with different model sizes, from 125 million to 2.7 billion parameters. The 2.7B model is trained with
| 2307.02486#23 | LongNet: Scaling Transformers to 1,000,000,000 Tokens | Scaling sequence length has become a critical demand in the era of large
language models. However, existing methods struggle with either computational
complexity or model expressivity, rendering the maximum sequence length
restricted. To address this issue, we introduce LongNet, a Transformer variant
that can scale sequence length to more than 1 billion tokens, without
sacrificing the performance on shorter sequences. Specifically, we propose
dilated attention, which expands the attentive field exponentially as the
distance grows. LongNet has significant advantages: 1) it has a linear
computation complexity and a logarithm dependency between any two tokens in a
sequence; 2) it can be served as a distributed trainer for extremely long
sequences; 3) its dilated attention is a drop-in replacement for standard
attention, which can be seamlessly integrated with the existing
Transformer-based optimization. Experimental results demonstrate that LongNet
yields strong performance on both long-sequence modeling and general language
tasks. Our work opens up new possibilities for modeling very long sequences,
e.g., treating a whole corpus or even the entire Internet as a sequence. | http://arxiv.org/pdf/2307.02486 | Jiayu Ding, Shuming Ma, Li Dong, Xingxing Zhang, Shaohan Huang, Wenhui Wang, Nanning Zheng, Furu Wei | cs.CL, cs.LG | Work in progress | null | cs.CL | 20230705 | 20230719 | [] |
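The compute estimate described above (total FLOPs of the matrix multiplications) can be sketched as follows. This is a standard approximation for a dense decoder, not the authors' exact accounting; multiply-adds are counted as 2 FLOPs, and the example dimensions are illustrative.

def dense_decoder_flops(seq_len, d_model, n_layers):
    """Approximate forward-pass FLOPs of a dense Transformer decoder (matmuls only)."""
    proj = 8 * seq_len * d_model**2   # Q/K/V and output projections
    attn = 4 * seq_len**2 * d_model   # attention scores (QK^T) and weighted values
    ffn = 16 * seq_len * d_model**2   # two FFN matmuls with a 4x hidden width
    return n_layers * (proj + attn + ffn)

# The attention term grows quadratically with sequence length:
for length in (2_048, 8_192, 32_768):
    print(length, f"{dense_decoder_flops(length, d_model=768, n_layers=12):.3e}")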