doi (string, 10-10) | chunk-id (int64, 0-936) | chunk (string, 401-2.02k) | id (string, 12-14) | title (string, 8-162) | summary (string, 228-1.92k) | source (string, 31-31) | authors (string, 7-6.97k) | categories (string, 5-107) | comment (string, 4-398, nullable) | journal_ref (string, 8-194, nullable) | primary_category (string, 5-17) | published (string, 8-8) | updated (string, 8-8) | references (list)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.08303 | 7 | 2 RELATED WORK 2.1 Dense Retrieval DR converts queries and documents into dense vectors on which an ANN index can be built for fast search. DPR [10] is a two-tower structure: one BERT model for queries and another for documents. For each query with one positive document and several negative documents, DPR measures the similarity between the query embedding and the document embeddings and then maximizes the log-likelihood of the positive passage. A variant of DPR utilizes a single BERT by concatenating the query and document as input and extracting the query embedding and document embedding after encoding. The query encoder and document encoder of ColBERT [11] [27] share the same BERT but utilize different special tokens after "[CLS]" to distinguish the query and the document. Unlike DPR, which directly measures the similarity between the query embedding and document embeddings, ColBERT introduces a late interaction mechanism. Specifically, for each token in the query, ColBERT computes its similarity with all the tokens in the document and applies maximum pooling to these similarity scores. The similarity score of a query-document pair is the sum of all the scores after the maximum pooling. Given a query with | 2307.08303#7 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
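The chunk in the record above (2307.08303#7) describes ColBERT's late interaction: each query-token embedding takes the maximum similarity over all document-token embeddings, and these maxima are summed. Below is a minimal NumPy sketch of that MaxSim scoring step, assuming pre-computed, L2-normalized token embeddings; the function and variable names are illustrative and not taken from ColBERT's released code.

```python
import numpy as np

def colbert_late_interaction_score(query_embs: np.ndarray, doc_embs: np.ndarray) -> float:
    """MaxSim scoring: for each query token embedding, take the maximum
    similarity over all document token embeddings, then sum over query tokens.

    query_embs: (num_query_tokens, dim), assumed L2-normalized
    doc_embs:   (num_doc_tokens, dim),  assumed L2-normalized
    """
    # (num_query_tokens, num_doc_tokens) matrix of cosine similarities
    sim = query_embs @ doc_embs.T
    # max over document tokens per query token, then sum over query tokens
    return float(sim.max(axis=1).sum())

# Toy usage with random vectors standing in for BERT token embeddings
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 128)); q /= np.linalg.norm(q, axis=1, keepdims=True)
d = rng.normal(size=(50, 128)); d /= np.linalg.norm(d, axis=1, keepdims=True)
print(colbert_late_interaction_score(q, d))
```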
2307.08621 | 7 | # 2 Retentive Networks
Retentive network (RetNet) is stacked with L identical blocks, which follow a similar layout (i.e., residual connection and pre-LayerNorm) as in Transformer [VSP+17]. Each RetNet block contains two modules: a multi-scale retention (MSR) module and a feed-forward network (FFN) module. We introduce the MSR module in the following sections. Given an input sequence $x = x_1 \cdots x_{|x|}$, RetNet encodes the sequence in an autoregressive way. The input vectors $\{x_i\}_{i=1}^{|x|}$ are first packed into $X^0 = [x_1, \cdots, x_{|x|}] \in \mathbb{R}^{|x| \times d_{\text{model}}}$, where $d_{\text{model}}$ is the hidden dimension. Then we compute contextualized vector representations $X^l = \mathrm{RetNet}_l(X^{l-1})$, $l \in [1, L]$.
# 2.1 Retention | 2307.08621#7 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
• Domain-aware Fine-Tuning: To foster the ability to adapt to specific domains of tables and their corresponding textual materials, domain-aware fine-tuning hinges on customizing training so that the model generates text embodying the stylistic and logical elements found in a given domain, thereby augmenting its understanding of domain-specific table data. To
Figure 1: The architecture of the TableGPT framework.
make this approach scalable and feasible, we have also developed a data processing pipeline that yields notable improvements with only a small amount of data, hence alleviating the resource-demanding aspect of training LLMs and supporting private deployment. | 2307.08674#7 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 7 | 3. Even within one block of attention computation, we partition the work between different warps of a thread block to reduce communication and shared memory reads/writes.
In Section 4, we empirically validate that FlashAttention-2 yields significant speedup compared to even FlashAttention. Benchmarks on different settings (with or without causal mask, different head dimensions) show that FlashAttention-2 achieves around 2× speedup over FlashAttention, reaching up to 73% of the theoretical max throughput in the forward pass, and up to 63% of the theoretical max throughput in the backward pass. When used end-to-end to train GPT-style models, we reach training speed of up to 225 TFLOPs/s per A100 GPU. | 2307.08691#7 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
# 2 Background
We provide some background on the performance characteristics and execution model of GPUs. We also describe the standard implementation of attention, as well as FlashAttention.
# 2.1 Hardware characteristics | 2307.08691#7 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 7 | Our experiments include comprehensive evaluations for our ALPAGASUS, incorporating free-form instruction evaluation, various benchmarks, and human studies. We select four different human-instruction test sets for evaluating instruction-following capability, including the ones used by WizardLM (Xu et al., 2023), Vicuna (Chiang et al., 2023), Koala (Geng et al., 2023), and Self-Instruct (Wang et al., 2022). Given that the GPT-4 judge matches both controlled and crowdsourced human preferences with notable agreement (> 80%) (Zheng et al., 2023), we employ GPT-4 as our judge for the major evaluations. In the 7B and 13B model comparisons, ALPAGASUS performs significantly better than ALPACA on all four test sets. To address potential concerns regarding biases in model-based evaluations, we conduct human studies and benchmark evaluations, both of which corroborate the superiority of our model compared to baseline counterparts. Furthermore, we present a fine-grained evaluation of ALPAGASUS on individual tasks including Generic, Roleplay, Knowledge, and Commonsense from the Vicuna test set. The results indicate ALPAGASUS exhibits advantages on a majority of the tasks. | 2307.08701#7 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 7 | In this section, we investigate hypotheses that point against chain of thought faithfulness by perturbing the chain of thought and observing the model's behavior.
# 2.1. Methods
Model For most experiments in this section, we use a 175B-parameter pretrained, decoder-only transformer (Vaswani et al., 2017) LLM (Radford et al., 2018; 2019; Brown et al., 2020), fine-tuned to be a helpful dialog assistant using reinforcement learning from human feedback (RLHF; Christiano et al., 2017; Ziegler et al., 2019; Stiennon et al., 2020), as in Bai et al. (2022). The one exception is the model used to generate mistakes in the adding mistakes experiment (§2.4); the model used here is the pretrained LM, without RLHF fine-tuning.
Tasks We select eight multiple choice tasks to evaluate that we expect will benefit from explicit reasoning:
• ARC Challenge (Clark et al., 2018): Grade-school level science questions, selected to be challenging for word retrieval or correlation approaches. | 2307.13702#7 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 8 | similarity scores. The similarity score of a query-document pair is the sum of all the scores after the maximum pooling. Given a query with one positive document and one negative document, ColBERT is optimized by the pairwise softmax cross-entropy loss over the computed scores of the positive and negative documents. ANCE [39] is a bi-encoder trained on (query, positive document, negative document) tuples where the negative document is retrieved from an ANN index built on the checkpoint of the last step. BM25CE [36] is a re-ranking-based DR method: BM25CE first applies BM25 to retrieve documents and then employs the trained cross-encoder to re-rank the retrieved documents. Our contribution is not to propose new dense retrievers but to propose a novel method to augment existing dense retrievers. | 2307.08303#8 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
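The SPTAR summary in the record above centers on soft prompt tuning: optimizing a small set of continuous prompt embeddings on limited ground-truth data while the LLM itself stays frozen. The sketch below illustrates the generic mechanism (prepending trainable "virtual token" embeddings to the frozen model's input embeddings); it is not SPTAR's actual implementation, and the module name, sizes, and initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable virtual-token embeddings prepended to the input embeddings
    of a frozen language model. Only these parameters are updated during tuning."""
    def __init__(self, n_virtual_tokens: int, d_model: int):
        super().__init__()
        # Small random init; in practice soft prompts are often initialized
        # from embeddings of real vocabulary tokens.
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, d_model) * 0.02)

    def forward(self, token_embeds: torch.Tensor) -> torch.Tensor:
        # token_embeds: (batch, seq_len, d_model) from the frozen LM's embedding layer
        batch = token_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        # (batch, n_virtual_tokens + seq_len, d_model), fed to the frozen LM
        return torch.cat([prompt, token_embeds], dim=1)

# Toy usage: 10 virtual tokens for a hypothetical 768-dim LM
soft_prompt = SoftPrompt(n_virtual_tokens=10, d_model=768)
dummy_embeds = torch.randn(2, 32, 768)   # stands in for embedded (document, query) text
extended = soft_prompt(dummy_embeds)
print(extended.shape)  # torch.Size([2, 42, 768])
# In SPTAR-style tuning, only soft_prompt.parameters() would be passed to the optimizer.
```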
2307.08621 | 8 | # 2.1 Retention
In this section, we introduce the retention mechanism that has a dual form of recurrence and parallelism. So we can train the models in a parallel way while recurrently conducting inference. Given input $X \in \mathbb{R}^{|x| \times d_{\text{model}}}$, we project it to a one-dimensional function $v(n) = X_n \cdot w_V$. Consider a sequence modeling problem that maps $v(n) \mapsto o(n)$ through states $s_n$. Let $v_n, o_n$ denote $v(n), o(n)$ for simplicity. We formulate the mapping in a recurrent manner:
$$s_n = A s_{n-1} + K_n^{\top} v_n, \qquad A \in \mathbb{R}^{d \times d},\ K_n \in \mathbb{R}^{1 \times d}$$
$$o_n = Q_n s_n = \sum_{m=1}^{n} Q_n A^{n-m} K_m^{\top} v_m, \qquad Q_n \in \mathbb{R}^{1 \times d} \qquad (1)$$
where we map vn to the state vector sn, and then implement a linear transform to encode sequence information recurrently.
Next, we make the projection Qn, Kn content-aware:
$$Q = XW_Q, \quad K = XW_K \qquad (2)$$ | 2307.08621#8 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
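Equation (1) in the record above defines retention as a linear recurrence over a state, with content-aware projections from Equation (2). As a concreteness check, here is a minimal NumPy sketch of that recurrence with the decay matrix $A$ simplified to a scalar $\gamma$ (as the paper later does) and the complex rotation omitted; it is an illustration, not the paper's implementation.

```python
import numpy as np

def recurrent_retention(X, W_Q, W_K, w_V, gamma):
    """Toy version of the recurrence in Equation (1):
        s_n = gamma * s_{n-1} + K_n^T v_n,   o_n = Q_n s_n
    with A simplified to gamma * I and the e^{i*theta} rotation dropped."""
    Q, K = X @ W_Q, X @ W_K      # content-aware projections, Equation (2)
    v = X @ w_V                  # one-dimensional "value" v(n) = X_n . w_V
    d = X.shape[1]
    s = np.zeros(d)              # state vector s_n
    outputs = []
    for n in range(X.shape[0]):
        s = gamma * s + K[n] * v[n]   # K_n^T v_n is a rank-1 update of the state
        outputs.append(Q[n] @ s)      # o_n = Q_n s_n (a scalar)
    return np.array(outputs)

rng = np.random.default_rng(0)
seq_len, d = 6, 8
X = rng.normal(size=(seq_len, d))
W_Q, W_K = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_V = rng.normal(size=(d,))
print(recurrent_retention(X, W_Q, W_K, w_V, gamma=0.9))
```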
2307.08674 | 8 | From a real-world production standpoint, the unstructured code output by NL2SQL poses significant challenges for preemptive checks and error corrections. Hence, we advocate for the use of structured command sequences, which simplify post-processing. Data-Copilot [38] also embraces this command-based approach with self-instruct [31], but its reliance on API-called native LLMs to directly comprehend the processing and analysis logic of tabular data presents limitations. Given the intrinsic data variability and task-specificity of tabular data, we believe an effective product should be custom-built for tabular data while maintaining general applicability to broader downstream tasks. This conviction underscores the imperative of introducing an LLM specifically pre-trained for tabular data.
To sum up, this work presents the pioneering TableGPT framework, a unified, full-fledged, holistic solution that enables efficient tabular data processing, analysis, and visualization, all driven by natural language. We summarize several important advantages of TableGPT as follows:
• Language-driven EDA: TableGPT understands user intent from natural language, dissects the desired actions, and executes external commands on the table. It subsequently returns the processed results in both tabular and textual explanations to the user. This novel approach simplifies the way users engage with table data, bringing an intuitive instantiation to Exploratory Data Analysis (EDA). | 2307.08674#8 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 8 | # 2 Background
We provide some background on the performance characteristics and execution model of GPUs. We also describe the standard implementation of attention, as well as FlashAttention.
# 2.1 Hardware characteristics
GPU performance characteristics. The GPU consists of compute elements (e.g., floating point arithmetic units) and a memory hierarchy. Most modern GPUs contain specialized units to accelerate matrix multiply in low precision (e.g., Tensor Cores on Nvidia GPUs for FP16/BF16 matrix multiply). The memory hierarchy comprises high bandwidth memory (HBM) and on-chip SRAM (aka shared memory). As an example, the A100 GPU has 40-80GB of high bandwidth memory (HBM) with bandwidth 1.5-2.0TB/s and 192KB of on-chip SRAM per each of 108 streaming multiprocessors with bandwidth estimated around 19TB/s [6, 7]. As the L2 cache is not directly controllable by the programmer, we focus on the HBM and SRAM for the purpose of this discussion. | 2307.08691#8 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.13702 | 8 | • ARC Challenge (Clark et al., 2018): Grade-school level science questions, selected to be challenging for word retrieval or correlation approaches.
• Encoded reasoning: The benefit from CoT may be attained by LLMs encoding the relevant information in the generated reasoning in a way that is not understandable to human readers (a form of steganography). This may be through changes in e.g. punctuation, word choice, or other phrasing differences that improve the LLM's predictions, but in a way that is not clearly understandable by a human. In this work, we find similar performance when replacing CoT with paraphrased CoT, indicating that the particular phrasing of CoT is
• ARC Easy (Clark et al., 2018): Grade-school level science questions.
• AQuA (Ling et al., 2017): Algebra word problems at a variety of difficulty levels.
1A fourth possible hypothesis for why CoT improves performance is that stylistic elements of the reasoning sample serve to elicit higher-quality answers from the LLM by prompting it to imitate a more cautious or thoughtful agent (Andreas, 2022), despite the reasoning sample not being faithful. This was investigated by Wei et al., who found evidence against this hypothesis. We do not investigate this further in this work.
| 2307.13702#8 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 9 | 2.2 LLMs in Dense Retrieval Most of the existing research in this field primarily concentrates on leveraging LLMs to improve DR tasks through various data generation techniques, including query generation [2, 3, 6, 7, 9, 26], relevance generation [14], and permutation generation [16, 23, 29]. InPars [2] feeds a task-specific human-written prompt and 3 example document-query pairs to a 6B GPT-3 [4] model, Curie, to generate 100K weak document-query pairs and selects the top 10K queries with respect to the probability of the query to augment the training data. InPars [2] employs the same dense retrieval model proposed in [21], which treats retrieval as a sequence-to-sequence task by concatenating a query and a document as input to a T5 model and
| 2307.08303#9 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 9 | Next, we make the projection Qn, Kn content-aware:
$$Q = XW_Q, \quad K = XW_K \qquad (2)$$
where $W_Q, W_K \in \mathbb{R}^{d \times d}$ are learnable matrices. We diagonalize the matrix $A = \Lambda (\gamma e^{i\theta}) \Lambda^{-1}$, where $\gamma, \theta \in \mathbb{R}^{d}$. Then we obtain $A^{n-m} = \Lambda (\gamma e^{i\theta})^{n-m} \Lambda^{-1}$. By absorbing $\Lambda$ into $W_Q$ and $W_K$, we can rewrite Equation (1) as:
$$o_n = \sum_{m=1}^{n} Q_n (\gamma e^{i\theta})^{n-m} K_m^{\top} v_m = \sum_{m=1}^{n} (Q_n (\gamma e^{i\theta})^{n})(K_m (\gamma e^{i\theta})^{-m})^{\top} v_m \qquad (3)$$
where $Q_n (\gamma e^{i\theta})^{n}$, $K_m (\gamma e^{i\theta})^{-m}$ is known as xPos [SDP+22], i.e., a relative position embedding proposed for Transformer. We further simplify $\gamma$ as a scalar, and Equation (3) becomes:
$$o_n = \sum_{m=1}^{n} \gamma^{n-m} (Q_n e^{in\theta})(K_m e^{im\theta})^{\dagger} v_m \qquad (4)$$
where $\dagger$ is the conjugate transpose. The formulation is easily parallelizable within training instances. | 2307.08621#9 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
Innovatively, we devise a global table encoder for understanding the whole table. TableGPT is able to fully understand the user query, meta-knowledge, and whole tabular data, which leads to much more reliable execution commands for table manipulation.
• Generalization and Privacy: By domain-aware fine-tuning, our TableGPT can better handle the data variability of tables and generalize to different domains. Further, our framework supports private deployment, offering robust data privacy protections. This aspect is critical in the modern age, where data privacy and protection are paramount.
# 2 TableGPT
# 2.1 Model Design
The development of TableGPT begins with the foundation provided by pre-trained LLMs. The advancements in the field of natural language processing have led to the development of a number of exceptional open-source LLMs, such as LLaMa [27], Phoenix [4], ChatGLM [36], Ziya [10], and Baichuan [12]. In designing TableGPT, we opted to use Phoenix [4] with 7B parameters as our base model for fine-tuning, owing to its excellent capabilities in handling both Chinese and English languages. This choice is not, however, exclusive. Our model design supports adaptation with other LLMs, providing versatility and flexibility in its implementation. | 2307.08674#9 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 9 | Execution Model. GPUs have a massive number of threads to execute an operation (called a kernel). Threads are organized into thread blocks, which are scheduled to run on streaming multiprocessors (SMs). Within each thread block, threads are grouped into warps (a group of 32 threads). Threads within a warp can communicate by fast shuffle instructions or cooperate to perform matrix multiply. Warps within a thread block can communicate by reading from / writing to shared memory. Each kernel loads inputs from HBM to registers and SRAM, computes, then writes outputs to HBM.
# 2.2 Standard Attention Implementation
Given input sequences $Q, K, V \in \mathbb{R}^{N \times d}$ where $N$ is the sequence length and $d$ is the head dimension, we want to compute the attention output $O \in \mathbb{R}^{N \times d}$:
$$S = QK^{\top} \in \mathbb{R}^{N \times N}, \quad P = \mathrm{softmax}(S) \in \mathbb{R}^{N \times N}, \quad O = PV \in \mathbb{R}^{N \times d},$$
where softmax is applied row-wise. For multi-head attention (MHA), this same computation is performed in parallel across many heads, and parallel over the batch dimension (number of input sequences in a batch). The backward pass of attention proceeds as follows. Let $dO \in \mathbb{R}^{N \times d}$ be the gradient of $O$ with respect to | 2307.08691#9 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
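The record above writes the standard attention forward pass as $S = QK^{\top}$, $P = \mathrm{softmax}(S)$, $O = PV$, with $S$ and $P$ materialized in HBM. Below is a minimal NumPy reference of exactly that computation; the max-subtraction is added for numerical stability, and the usual $1/\sqrt{d}$ scaling is omitted as in the quoted formula.

```python
import numpy as np

def standard_attention(Q, K, V):
    """Reference (memory-hungry) attention: S = Q K^T, P = row-wise softmax(S), O = P V.
    Materializing the N x N matrices S and P is what FlashAttention avoids."""
    S = Q @ K.T                               # (N, N) attention scores
    S = S - S.max(axis=-1, keepdims=True)     # stabilize the softmax
    P = np.exp(S)
    P = P / P.sum(axis=-1, keepdims=True)     # row-wise softmax
    return P @ V                              # (N, d) output

rng = np.random.default_rng(0)
N, d = 16, 8
Q, K, V = (rng.normal(size=(N, d)) for _ in range(3))
print(standard_attention(Q, K, V).shape)  # (16, 8)
```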
[Figure: the ChatGPT-based response quality evaluator scores Alpaca training triplets. An incorrect classification of "Banana" as animal/vegetable and an incomplete pronoun-omission rewrite ("Told busy.") each receive a score of 2.0, while a correct translation of "Bonne chance" as "Good luck" receives a score of 5.0.] | 2307.08701#9 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
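The figure description and summary in the record above describe filtering the 52k Alpaca instructions down to a high-quality subset by keeping only triplets whose LLM-judge score clears a threshold. A minimal, judge-agnostic sketch of that selection step follows; the judge is stubbed out, and the 4.5 cutoff and function names are illustrative assumptions rather than the paper's exact configuration.

```python
from typing import Callable, Dict, List

def filter_ift_data(examples: List[dict], score_fn: Callable[[dict], float],
                    threshold: float = 4.5) -> List[dict]:
    """Keep only instruction-tuning triplets whose judge score clears the threshold.
    `score_fn` stands in for an LLM judge (e.g., ChatGPT asked to rate a response
    to an instruction/input on a 0-5 accuracy scale)."""
    return [ex for ex in examples if score_fn(ex) >= threshold]

# Toy usage with hard-coded scores standing in for the LLM judge
examples = [
    {"instruction": "Translate the phrase 'Bonne chance' into English.", "response": "Good luck."},
    {"instruction": "Classify the item as either animal or vegetable. Input: Banana",
     "response": "Animal: No, it's a vegetable."},
]
fake_scores: Dict[int, float] = {0: 5.0, 1: 2.0}
kept = filter_ift_data(examples, score_fn=lambda ex: fake_scores[examples.index(ex)])
print(len(kept))  # 1
```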
2307.13702 | 9 | # Chain of thought prompt and sample
Human: Question: 30% of all Huhulians own at least one TV. 24% of Huhulians who own at least one TV own at least four TVâs. What percent of Huhulians own at least four TVâs?
Choices: (A): .084% (B): 24% (C): 4.67% (D): 7.2% (E): 15.6%
Assistant: Let's think step by step: 30% of Huhulians own at least one TV. Of those 30%, 24% own at least four TVs. So 24% of 30%, or 0.24 x 0.3 = 0.072 = 7.2% of Huhulians own at least four TVs. The correct answer is choice (D).
Figure 2. Statistics about collected chains of thought. Left: histogram of CoT lengths; right: performance with and without CoT. | 2307.13702#9 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 10 | outputs the relevance score. Like InPars [2], PROMPTAGATOR [6] also feeds a task-specific human-written prompt and at most 8 example document-query pairs to an LLM to generate weak data. Instead of selecting the top weak queries by their probabilities, PROMPTAGATOR first trains a filter on uncleaned document-query pairs and filters the weak queries by dropping those that cannot retrieve their paired documents among the top-K retrieved documents. By repeating this process multiple times, the filter significantly improves the performance of a dual-encoder DPR retriever. Besides, PROMPTAGATOR [6] utilizes a much bigger LLM: a 175B model, Flan [38], which cannot be accessed by most researchers. UPR [26] utilizes an LLM as a zero-shot reranker to re-rank the passages retrieved by retrievers like BM25 and DPR. Given a query, for each retrieved passage, UPR utilizes the prompt "Please write a question based on this passage" to prompt an LLM and computes the average log-likelihood of the question tokens conditioned on the input document as the relevance score. | 2307.08303#10 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
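The chunk in the record above describes PROMPTAGATOR's round-trip filter: a generated weak query is kept only if it retrieves its paired document among the top-K results. A minimal sketch of that filtering loop follows, with the retriever stubbed out; the names and toy data are illustrative, not from the paper's pipeline.

```python
def round_trip_filter(weak_pairs, retrieve, k=10):
    """Round-trip consistency filtering: keep a (document, query) pair only if the
    query retrieves its own source document within the retriever's top-k results.
    `retrieve(query, k)` is a stand-in returning a ranked list of document ids."""
    kept = []
    for doc_id, query in weak_pairs:
        if doc_id in retrieve(query, k):
            kept.append((doc_id, query))
    return kept

# Toy usage with a hard-coded retriever stub
corpus_hits = {"what is dense retrieval?": ["d1", "d7", "d3"],
               "who won the game?": ["d9", "d2"]}
weak_pairs = [("d1", "what is dense retrieval?"), ("d5", "who won the game?")]
print(round_trip_filter(weak_pairs, retrieve=lambda q, k: corpus_hits[q][:k]))
# -> [('d1', 'what is dense retrieval?')]
```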
2307.08621 | 10 | where $\dagger$ is the conjugate transpose. The formulation is easily parallelizable within training instances.
In summary, we start with recurrent modeling as shown in Equation (1), and then derive its parallel formulation in Equation (4). We consider the original mapping $v(n) \mapsto o(n)$ as vectors and obtain the retention mechanism as follows.
The Parallel Representation of Retention As shown in Figure 3a, the retention layer is defined as:
$$Q = (XW_Q) \odot \Theta, \quad K = (XW_K) \odot \bar{\Theta}, \quad V = XW_V$$
$$\Theta_n = e^{in\theta}, \quad D_{nm} = \begin{cases} \gamma^{n-m}, & n \ge m \\ 0, & n < m \end{cases} \qquad (5)$$
$$\mathrm{Retention}(X) = (QK^{\top} \odot D)V$$
where $\bar{\Theta}$ is the complex conjugate of $\Theta$, and $D \in \mathbb{R}^{|x| \times |x|}$ combines causal masking and exponential decay along relative distance as one matrix. Similar to self-attention, the parallel representation enables us to train the models with GPUs efficiently.
(a) Parallel representation. (b) Recurrent representation.
Figure 3: Dual form of RetNet. "GN" is short for GroupNorm. | 2307.08621#10 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 10 | What sets TableGPT apart from its predecessors [28, 17, 38] is the novel approach to its fine-tuning process. We performed the fine-tuning on a vast corpus, comprising 2T tokens of textual data and 0.3M tables. This corpus offers a diverse landscape for the model to learn from, including but not limited to user query-command sequence pairs and publicly available domain-specific data for table analysis reports.
The overall architecture of TableGPT is shown in Figure 1. When a user inputs a table and a query, these are received by TableGPT, which consists of a table encoder and an LLM. The table encoder serves to extract vector representations from the input table. These representations, coupled with the text query, are then fed into the LLM for inference. The LLM discerns the user's query intent and generates an output that includes both a command sequence and a textual reply. The command sequence undergoes error correction in the command system's corrector before it is fed into the executor for execution. The final output, provided to the user, includes the manipulated table and a textual reply. This streamlined process delivers efficient, reliable responses to table data queries, enhancing user experience and simplifying data analysis.
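The inference flow just described can be summarized as a small pipeline sketch. All component names and interfaces below (TableEncoder, corrector, executor, ModelOutput) are illustrative placeholders I introduce for the example, not TableGPT's actual API:

```python
"""Hedged sketch of the table-query inference flow described above."""
from dataclasses import dataclass

@dataclass
class ModelOutput:
    reply_text: str
    command_sequence: list[str]

def table_gpt_infer(table, query, table_encoder, llm, corrector, executor):
    table_embedding = table_encoder(table)             # global vector for the whole table
    output: ModelOutput = llm(table_embedding, query)  # text reply + command sequence
    commands = corrector(output.command_sequence)      # error-correct commands before running them
    new_table = executor(commands, table)              # apply the commands to the table
    return new_table, output.reply_text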
# 2.2 Global Representation of Table | 2307.08674#10 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.13702 | 10 | Figure 2. Statistics about collected chains of thought. Left: histogram of CoT lengths; right: performance with and without CoT.
Human: Given all of the above, what's the single, most likely answer?
into sentences for analysis using the NLTK punkt sentence tokenizer (Bird et al., 2009).
Assistant: The single, most likely answer is (D)
# 2.2. Chain of Thought Statistics
Table 1. An example of the chain of thought prompt we use, as applied to a question from AQuA. Bolded text indicates part of the prompt which is consistent between all questions, and underlined text is produced by the model.
Fig. 2 provides context for the rest of the experiments by giving an overview of results under the standard chain-of-thought condition. Performance metrics are presented in Table 2 as well. The collected reasoning samples have a mean of 4 steps (sentences), with 89% of samples having between three and six. 2
⢠HellaSwag (Zellers et al., 2019): Text completion task, with examples selected to be difï¬cult for language models but trivial for humans.
⢠LogiQA (Liu et al., 2020): Questions for logical rea- soning from the National Civil Servants Examination of China, translated into English. | 2307.13702#10 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 11 | question based on this passage" to prompt an LLM and computes the average log-likelihood of the question tokens conditioned on the input document as the relevance score (see the scoring sketch following this record). Because of the expensive training of LLMs, all these works utilize LLMs as query generators instead of fine-tuning them. Improved variations of InPars [2], such as InPars-v2 [9] and InPars-Light [3], have been introduced to enhance the original methodology. HyDE [7] leverages LLMs to augment queries by generating hypothetical documents, effectively capturing relevance patterns for unsupervised retrieval. LRL [16] trains a listwise zero-shot re-ranker that leverages LLMs without task-specific supervised training. Unlike pointwise re-rankers, LRL considers all candidate documents to determine their relative ranking positions. Another approach involves instructional permutation generation [29], where the focus is on instructing LLMs to directly output permutations of passages. Permutation distillation techniques are employed to transfer the passage ranking capabilities of ChatGPT into a smaller, specialized ranking model. While these works utilize LLMs as query generators without | 2307.08303#11 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
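The UPR-style relevance score described above is simply the average log-likelihood of the query tokens given the passage-conditioned prompt. A hedged sketch, assuming the per-token log-probabilities have already been obtained from some causal LM (the function names here are my own):

```python
import math

def upr_score(query_token_logprobs: list[float]) -> float:
    """UPR-style relevance: average log-likelihood of the query tokens, conditioned on a
    prompt like 'Passage: {passage} Please write a question based on this passage: {query}'."""
    return sum(query_token_logprobs) / len(query_token_logprobs)

def rerank(candidates: dict[str, list[float]]) -> list[str]:
    """Sort retrieved passage ids by descending UPR score."""
    return sorted(candidates, key=lambda pid: upr_score(candidates[pid]), reverse=True)

# Toy example: passage 'p1' explains the query better, so its query tokens are more likely.
candidates = {"p1": [math.log(0.4), math.log(0.5), math.log(0.3)],
              "p2": [math.log(0.1), math.log(0.2), math.log(0.05)]}
print(rerank(candidates))  # ['p1', 'p2']
```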
2307.08621 | 11 |
(a) Parallel representation. (b) Recurrent representation.
Figure 3: Dual form of RetNet. "GN" is short for GroupNorm.
The Recurrent Representation of Retention As shown in Figure 3b, the proposed mechanism can also be written as recurrent neural networks (RNNs), which is favorable for inference. For the n-th timestep, we recurrently obtain the output as:
S_n = γ S_{n−1} + K_n^⊤ V_n
Retention(X_n) = Q_n S_n,  n = 1, · · · , |x|   (6)
where Q, K, V, γ are the same as in Equation (5).
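A minimal sketch of the recurrent form in Equation (6), again assuming real-valued projections with the Θ rotation omitted. The point it illustrates is that decoding carries only a d × d state, so each step has O(1) cost:

```python
import numpy as np

def recurrent_retention(X, W_q, W_k, W_v, gamma):
    """Hedged sketch of Equation (6): S_n = gamma * S_{n-1} + K_n^T V_n, output Q_n S_n."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d = Q.shape[1]
    S = np.zeros((d, d))
    outputs = []
    for n in range(X.shape[0]):
        S = gamma * S + np.outer(K[n], V[n])   # constant-size state update
        outputs.append(Q[n] @ S)               # O(1) work per decoding step
    return np.stack(outputs)

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8))
W_q, W_k, W_v = (rng.standard_normal((8, 8)) for _ in range(3))
rec = recurrent_retention(X, W_q, W_k, W_v, gamma=0.9)
print(rec.shape)  # (6, 8); mathematically this matches the parallel form (Equation (5))
```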
The Chunkwise Recurrent Representation of Retention A hybrid form of parallel representation and recurrent representation is available to accelerate training, especially for long sequences. We divide the input sequences into chunks. Within each chunk, we follow the parallel representation (Equation (5)) to conduct computation. In contrast, cross-chunk information is passed following the recurrent representation (Equation (6)). Specifically, let B denote the chunk length. We compute the retention output of the i-th chunk via: | 2307.08621#11 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 11 | # 2.2 Global Representation of Table
The rapid development of large language models (LLMs) has seen them interfacing with a multitude of modalities such as vision and audio. For instance, the integration of vision and LLMs has led to models like CLIP [23] (Contrastive Language–Image Pretraining) from OpenAI, which connects images and text through a shared latent space. The combination of audio and LLMs gave rise to models like Wave2Vec [1] and Tacotron [32], which employ representations of audio such as spectrograms to generate or understand speech.
Despite these advancements, the exploration of LLMs interfacing with tabular data remains limited. The question of how to enable LLMs to comprehend and interpret tables is essential. Some studies have attempted to convert sample rows of table data directly into a sentence-like text description [7], while others have attempted to artificially define a global representation of table data through the template-based extraction of column names, industry background, and other metadata schema [38]. However, these approaches only extract partial information from table data for LLMs, consequently overlooking the global information and industry background inherent in the data. | 2307.08674#11 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 11 | where dsoftmax is the gradient (backward pass) of softmax applied row-wise. One can work out that if p = softmax(s) for some vector s and p, then with output gradient dp, the input gradient is ds = (diag(p) − pp^⊤) dp. Standard attention implementations materialize the matrices S and P to HBM, which takes O(N²) memory. Often N ≫ d (typically N is on the order of 1k–8k and d is around 64–128). The standard attention implementation (1) calls the matrix-multiply (GEMM) subroutine to compute S = QK^⊤ and writes the result to HBM, then (2) loads S from HBM to compute softmax and writes the result P to HBM, and finally (3) calls GEMM to get O = PV. As most of the operations are bounded by memory bandwidth, the large number of memory accesses translates to slow wall-clock time. Moreover, the required memory is O(N²) due to having to materialize S and P, and P ∈ R^{N×N} has to be saved for the backward pass to compute the gradients.
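A hedged sketch of the standard, memory-bound implementation just described, making the O(N²) materialization of S and P explicit (scaling by 1/√d, masking, and dropout are omitted):

```python
import numpy as np

def standard_attention(Q, K, V):
    """Hedged sketch of the standard implementation: S and P are fully materialized,
    costing O(N^2) memory, and P is kept around for the backward pass."""
    S = Q @ K.T                                    # (N, N) scores written to HBM
    P = np.exp(S - S.max(axis=-1, keepdims=True))  # row-wise softmax, also (N, N)
    P /= P.sum(axis=-1, keepdims=True)
    O = P @ V                                      # (N, d) output
    return O, P                                    # P is saved for the backward pass

N, d = 1024, 64
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((N, d)) for _ in range(3))
O, P = standard_attention(Q, K, V)
print(O.shape, P.nbytes / 1e6, "MB held just for P")  # (1024, 64) ~8.4 MB
```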
2.3 FlashAttention To speed up attention on hardware accelerators such as GPU, [5] proposes an algorithm to reduce the memory reads/writes while maintaining the same output (without approximation).
# 2.3.1 Forward pass | 2307.08691#11 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 11 | Figure 3: The fine-tuning pipeline of ALPAGASUS. We prompt ChatGPT as our auto-grader to score each training triplet on a scale of 0 to 5. We then use the exact same instruction fine-tuning script of ALPACA to train ALPAGASUS on the filtered data with scores higher than a threshold.
and evaluation strategies can generalize to other instruction finetuning datasets and LLMs, thereby paving the way for a promising new research trajectory aimed at pragmatic LLM deployment.
2 METHODOLOGY
2.1 OVERVIEW
Unlike the recent work (Zhou et al., 2023), which relies on human labor to curate 1k high-quality instruction data that leads to a better finetuned model, we aim to avoid the expensive and time-consuming human annotations. Hence, we exploit the potential of strong LLMs to be auto-graders of the training data and then filter out the data with lower scores. | 2307.08701#11 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 11 | • LogiQA (Liu et al., 2020): Questions for logical reasoning from the National Civil Servants Examination of China, translated into English.
Seven of the eight tasks show a performance improvement under chain of thought, with AQuA showing the greatest improvement. HellaSwag is the single exception to the trend, showing a degradation in performance instead. Throughout the rest of this paper, tasks will be ordered by the extent to which we see an improvement due to chain of thought reasoning, except where noted.
⢠MMLU (Hendrycks et al., 2021): the Massive Mul- titask Language Understanding benchmark, largely drawn from exam questions, covering 57 tasks includ- ing STEM and humanities topics.
# 2.3. Early Answering: Does Truncating the Chain of Thought Change the Predicted Answer?
⢠OpenBookQA (Mihaylov et al., 2018): Elementary- school-level science questions.
⢠TruthfulQA (Lin et al., 2022): Factual questions from a variety of domains meant to elicit misconceptions (formatted as multiple-choice). | 2307.13702#11 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 12 | Q_[i] = Q_{Bi:B(i+1)},  K_[i] = K_{Bi:B(i+1)},  V_[i] = V_{Bi:B(i+1)}
R_i = K_[i]^⊤ (V_[i] ⊙ ζ) + γ^B R_{i−1},  with ζ_{ij} = γ^{B−i−1}
Retention(X_[i]) = (Q_[i] K_[i]^⊤ ⊙ D) V_[i] (inner-chunk) + (Q_[i] R_{i−1}) ⊙ ξ (cross-chunk),  with ξ_{ij} = γ^{i+1}   (7)
where [i] indicates the i-th chunk, i.e., x_[i] = [x_{(i−1)B+1}, · · · , x_{iB}].
# 2.2 Gated Multi-Scale Retention
We use h = d_model/d retention heads in each layer, where d is the head dimension. The heads use different parameter matrices W_Q, W_K, W_V ∈ R^{d×d}. Moreover, multi-scale retention (MSR) assigns different γ for each head. For simplicity, we set γ identical among different layers and keep them fixed. In addition, we add a swish gate [HG16, RZL17] to increase the non-linearity of retention layers. Formally, given input X, we define the layer as: | 2307.08621#12 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
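The chunkwise recurrent form (Equation (7), reconstructed in the record above) can be checked against the parallel form with a short numerical sketch. This is a hedged illustration with the Θ rotation omitted; within each chunk the computation is parallel, and a d × d state R carries the cross-chunk history:

```python
import numpy as np

def decay_mask(B, gamma):
    idx = np.arange(B)
    return np.where(idx[:, None] >= idx[None, :],
                    gamma ** (idx[:, None] - idx[None, :]), 0.0)

def chunkwise_retention(Q, K, V, gamma, B):
    """Hedged sketch of Equation (7): per-chunk parallel retention plus a recurrent state."""
    n, d = Q.shape
    assert n % B == 0, "for simplicity, assume the length is a multiple of the chunk size"
    D = decay_mask(B, gamma)
    zeta = gamma ** (B - 1 - np.arange(B))[:, None]   # within-chunk decay for the state update
    xi = gamma ** (np.arange(B) + 1)[:, None]         # decay applied to the cross-chunk term
    R = np.zeros((d, d))
    outs = []
    for i in range(n // B):
        q, k, v = Q[i*B:(i+1)*B], K[i*B:(i+1)*B], V[i*B:(i+1)*B]
        inner = (q @ k.T * D) @ v                     # inner-chunk, parallel
        cross = (q @ R) * xi                          # cross-chunk, recurrent
        outs.append(inner + cross)
        R = k.T @ (v * zeta) + (gamma ** B) * R       # update state for the next chunk
    return np.concatenate(outs)

rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
full = (Q @ K.T * decay_mask(8, 0.9)) @ V             # parallel reference (Equation (5))
print(np.allclose(chunkwise_retention(Q, K, V, 0.9, B=4), full))  # True
```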
2307.08674 | 12 | Notably, the whole table must be embedded into a single vector, rather than generating sample-wise embeddings. This is non-trivial because, unlike images, videos, and audio, table data is an inherently abstract, structured data type. Furthermore, it possesses a dual permutation-invariance structure: shuffling rows or columns does not affect the information contained within the table, in contrast to images and audio, which carry inductive bias in adjacent positions or sequences. Moreover, tables from different domains vary in size and format, such as having different numbers of discrete and continuous columns, making it challenging to extract features from diverse tables with a unified neural network architecture [34].
Yet, it remains an open problem to effectively extract global representations from tables for LLMs to achieve comprehensive table understanding. To this end, we present a Cascaded Table Encoder that jointly extracts knowledge from metadata and whole numerical entries.
Cascaded Table Encoder. Consider the approach of an experienced data scientist encountering a table. They typically examine the structure of the table data, such as the table headers and distribution of feature columns, to understand the meaning of different cells based on their position, without focusing too much on the numeric information of each cell. Following this biologically plausible
| 2307.08674#12 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 12 | # 2.3.1 Forward pass
FlashAttention applies the classical technique of tiling to reduce memory IOs, by (1) loading blocks of inputs from HBM to SRAM, (2) computing attention with respect to that block, and then (3) updating the output without writing the large intermediate matrices S and P to HBM. As the softmax couples entire rows or blocks of rows, online softmax [11, 13] can split the attention computation into blocks and rescale the output of each block to finally get the right result (with no approximation). By significantly reducing the amount of memory reads/writes, FlashAttention yields 2-4× wall-clock speedup over optimized baseline attention implementations.
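The block-wise rescaling idea just described can be verified on two score blocks with a short sketch. This is a hedged illustration of the online-softmax trick (not the kernel itself): running max and sum statistics are rescaled when a new block arrives, and the result matches the full softmax exactly.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def online_two_block(S1, S2, V1, V2):
    """Process block 1, then fold in block 2 by rescaling the running statistics,
    never materializing [S1 S2] at once."""
    m = S1.max(axis=-1, keepdims=True)                 # running row max
    l = np.exp(S1 - m).sum(axis=-1, keepdims=True)     # running row sum
    O = np.exp(S1 - m) @ V1                            # unnormalized running output
    m_new = np.maximum(m, S2.max(axis=-1, keepdims=True))
    scale = np.exp(m - m_new)                          # rescale old statistics
    l = l * scale + np.exp(S2 - m_new).sum(axis=-1, keepdims=True)
    O = O * scale + np.exp(S2 - m_new) @ V2
    return O / l                                       # final normalization

rng = np.random.default_rng(0)
Br, Bc, d = 4, 5, 3
S1, S2 = rng.standard_normal((Br, Bc)), rng.standard_normal((Br, Bc))
V1, V2 = rng.standard_normal((Bc, d)), rng.standard_normal((Bc, d))
reference = softmax(np.concatenate([S1, S2], axis=-1)) @ np.concatenate([V1, V2], axis=0)
print(np.allclose(online_two_block(S1, S2, V1, V2), reference))  # True
```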
We describe the online softmax technique [11] and how it is used in attention [13]. For simplicity, consider just one row block of the attention matrix S, of the form [S^(1) S^(2)] for some matrices S^(1), S^(2) ∈ R^{B_r×B_c}, where B_r and B_c are the row and column block sizes. We want to compute softmax of this row block and multiply with the value, of the form [V^(1); V^(2)] for some matrices V^(1), V^(2) ∈ R^{B_c×d}. Standard softmax would | 2307.08691#12 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 12 | In particular, we prompt a strong API LLM, i.e., ChatGPT, to produce a score for each triplet of (instruction, input, response). The prompt is given in Fig. 4, where "dimension" denotes a user-preferred property such as helpfulness and accuracy. We then only select the triplets with scores higher than a certain threshold to fine-tune a LLaMA-series model following an existing IFT pipeline. Fig. 3 illustrates the data selection and training pipeline.
2.2 DATA RATING AND FILTERING
Given an IFT dataset V of triplets x = (instruction, input, response) with x ∈ V and an open-sourced LLM θ (e.g., LLaMA), let θ_V denote the finetuned θ on V; our overarching goal is to select a subset S ⊆ V such that IFT on S results in a better model θ_S than θ_V.
In order to select S from V, we prompt an API LLM G(·) (e.g., ChatGPT3) as an auto-grader, rating each sample x ∈ V by a score G(x, p_G), wherein p_G is the rating prompt in Fig. 4. We then select x_i whose score is above a certain threshold τ, i.e., | 2307.08701#12 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 12 | • TruthfulQA (Lin et al., 2022): Factual questions from a variety of domains meant to elicit misconceptions (formatted as multiple-choice).
Prompting and Sampling For each question on each task, we use the prompt shown in Table 1, modified slightly from Bowman et al. (2022). The number of choices varies depending on the task. We sample 100 reasoning samples for each problem using nucleus sampling (Holtzman et al., 2020) with p = 0.95 and temperature 0.8. We then append the prompt for the final answer (as in the final human turn in Table 1), and we obtain the model's next token probabilities for each answer choice. Each reasoning sample is then split | 2307.13702#12 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 13 | 2.3 Prompt Tuning Prompt tuning has demonstrated significant potential in adapting pre-trained LLMs to specific tasks by focusing on tuning the prompt module instead of fine-tuning the entire model [30]. Prefix-Tuning [13] introduces a prompt module with learnable parameters θ outputting embeddings which are prepended to the embeddings of other inputted tokens (see the sketch following this record). This approach keeps the original training objective intact, while updating only the prefix parameters θ through gradient descent for each task. Another similar technique, referred to as "gisting" [19], compresses arbitrary prompts into a condensed set of virtual "gist" tokens using a meta-learning approach. Building upon T5 [24], Lester et al. [12] proposed a method where the learnable embeddings of a task-specific prompt are prepended to the encoder's output. The concatenated embeddings are then passed through the decoder to compute the training objective. This approach enables the model to incorporate task-specific information into the decoding process. Zhou et al. [40] introduced Dual Context-guided Continuous Prompt (DCCP), which employs soft prompt tuning using dual inputs: context-aware prompt and label-aware | 2307.08303#13 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
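A minimal sketch of the soft prompt mechanism described above: a small matrix of learnable "virtual token" embeddings is prepended to the frozen model's input embeddings, and only the prompt parameters receive gradients. This illustrates the general idea under those assumptions, not SPTAR's or any specific library's implementation:

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learnable virtual-token embeddings prepended to frozen token embeddings."""
    def __init__(self, n_virtual_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(n_virtual_tokens, embed_dim) * 0.02)

    def forward(self, token_embeddings: torch.Tensor) -> torch.Tensor:
        # token_embeddings: (batch, seq_len, embed_dim)
        batch = token_embeddings.shape[0]
        prefix = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prefix, token_embeddings], dim=1)

soft_prompt = SoftPrompt(n_virtual_tokens=20, embed_dim=32)
frozen_embeddings = torch.randn(2, 10, 32, requires_grad=False)
augmented = soft_prompt(frozen_embeddings)   # (2, 30, 32), fed to the frozen LM
print(augmented.shape, sum(p.numel() for p in soft_prompt.parameters()))  # only 20*32 trainable
```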
2307.08621 | 13 | γ = 1 − 2^{−5−arange(0,h)} ∈ R^h
head_i = Retention(X, γ_i)
Y = GroupNorm_h(Concat(head_1, · · · , head_h))
MSR(X) = (swish(X W_G) ⊙ Y) W_O   (8)
where W_G, W_O ∈ R^{d_model×d_model} are learnable parameters, and GroupNorm [WH18] normalizes the output of each head, following SubLN proposed in [SPP+19]. Notice that the heads use multiple γ scales, which results in different variance statistics. So we normalize the head outputs separately.
The pseudocode of retention is summarized in Figure 4.
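Since Figure 4 is not reproduced here, below is a hedged sketch of the gated multi-scale retention layer (Equation (8)): each head uses its own decay γ_h = 1 − 2^{−5−h}, head outputs are normalized separately (a simplified stand-in for per-head GroupNorm), concatenated, gated with swish, and projected. The Θ rotation is again omitted.

```python
import numpy as np

def retention(X, W_q, W_k, W_v, gamma):
    n = X.shape[0]
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    idx = np.arange(n)
    D = np.where(idx[:, None] >= idx[None, :], gamma ** (idx[:, None] - idx[None, :]), 0.0)
    return (Q @ K.T * D) @ V

def swish(x):
    return x / (1.0 + np.exp(-x))

def multi_scale_retention(X, params, W_g, W_o):
    """Hedged sketch of gated MSR (Equation (8)) with simplified per-head normalization."""
    heads = []
    for h, (W_q, W_k, W_v) in enumerate(params):
        gamma = 1.0 - 2.0 ** (-5.0 - h)              # per-head decay scale
        y = retention(X, W_q, W_k, W_v, gamma)
        y = (y - y.mean()) / (y.std() + 1e-6)        # simplified stand-in for GroupNorm
        heads.append(y)
    Y = np.concatenate(heads, axis=-1)
    return (swish(X @ W_g) * Y) @ W_o                # swish gate, then output projection

rng = np.random.default_rng(0)
d_model, head_dim, n_heads, n_tokens = 16, 4, 4, 6
X = rng.standard_normal((n_tokens, d_model))
params = [tuple(rng.standard_normal((d_model, head_dim)) for _ in range(3)) for _ in range(n_heads)]
W_g, W_o = rng.standard_normal((d_model, d_model)), rng.standard_normal((d_model, d_model))
print(multi_scale_retention(X, params, W_g, W_o).shape)  # (6, 16)
```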
| 2307.08621#13 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 13 |
approach, we propose a novel cascading table encoder. It divides the information in the table data into two main parts. The first part learns the metadata representation of the table, such as schema, industry background, and the meanings of column names, which can help LLMs understand the global information of the table structure. The second part learns the numerical information representation of the table, such as the distribution and trends of values in different columns, helping LLMs understand the global information of the table numbers like human experts.
We consider the rows and columns of the table as elements of a set and learn the overall representation of the entire set. We use a modified set transformer [16] as the backbone of the table encoder. The set transformer [16], originally designed for dealing with permutation invariant problems, aligns well with the inherent structure of tabular data. We enhance it with an attention mechanism [29] that can capture the interdependencies between different rows or columns of the table, enabling the model to understand the relations between different parts of the table data.
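A hedged sketch of a permutation-invariant table encoder in the spirit described above. This is an illustrative toy, not TableGPT's actual architecture: rows are pooled as a set, columns attend to one another without positional encodings, and pooling yields one vector for the whole table, so shuffling rows or columns leaves the result unchanged.

```python
import torch
import torch.nn as nn

class TinyTableEncoder(nn.Module):
    """Toy permutation-invariant table encoder producing one vector per table."""
    def __init__(self, cell_dim: int, hidden: int = 64, n_heads: int = 4):
        super().__init__()
        self.col_proj = nn.Linear(cell_dim, hidden)
        self.attn = nn.MultiheadAttention(hidden, n_heads, batch_first=True)
        self.out = nn.Linear(hidden, hidden)

    def forward(self, table: torch.Tensor) -> torch.Tensor:
        # table: (n_rows, n_cols, cell_dim) -- pre-embedded cells
        cols = table.mean(dim=0)                   # (n_cols, cell_dim): rows pooled as a set
        h = self.col_proj(cols).unsqueeze(0)       # (1, n_cols, hidden)
        h, _ = self.attn(h, h, h)                  # columns attend to each other
        return self.out(h.mean(dim=1)).squeeze(0)  # (hidden,): single vector for the table

encoder = TinyTableEncoder(cell_dim=8)
table = torch.randn(100, 5, 8)                     # 100 rows, 5 columns, 8-dim cell embeddings
print(encoder(table).shape)                        # torch.Size([64])
```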
This encoder is pre-trained on ten thousand table datasets using a masked table modeling approach, similar to the masked language modeling used in BERT [5] but adapted to tabular data. The learned table representation not only can be used for table understanding but also can enhance the predictive performance of downstream classifiers. | 2307.08674#13 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
For clarity of exposition, we omit the scaling of QK^⊤ (typically by 1/√d), and optionally elementwise masking on S and/or dropout applied to P.
compute:
m = max(rowmax(S^(1)), rowmax(S^(2))) ∈ R^{B_r}
ℓ = rowsum(e^{S^(1)−m}) + rowsum(e^{S^(2)−m}) ∈ R^{B_r}
P = [P^(1) P^(2)] = diag(ℓ)^{−1} [e^{S^(1)−m} e^{S^(2)−m}] ∈ R^{B_r×2B_c}
O = [P^(1) P^(2)] [V^(1); V^(2)] = diag(ℓ)^{−1} (e^{S^(1)−m} V^(1) + e^{S^(2)−m} V^(2)) ∈ R^{B_r×d}
Online softmax instead computes "local" softmax with respect to each block and rescales to get the right output at the end: | 2307.08691#13 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 13 | S = {x ∈ V : G(x, p_G) ≥ τ}. (1)
We achieve θ_S by finetuning θ on S using an existing IFT framework.
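The selection rule in Eq. (1) amounts to keeping only the triplets whose grader score reaches a threshold. A hedged sketch is given below; `grade_with_llm` is a placeholder for a real call to the grader (e.g., ChatGPT with the rating prompt of Fig. 4), and the default threshold value is illustrative:

```python
"""Hedged sketch of LLM-graded data filtering: S = {x in V : G(x, p_G) >= tau}."""

def grade_with_llm(triplet: dict) -> float:
    # Placeholder: in practice, send the rating prompt plus the triplet to the grader
    # and parse the numeric 0-5 score from the first line of its reply.
    raise NotImplementedError

def select_subset(dataset: list[dict], tau: float = 4.5, grader=grade_with_llm) -> list[dict]:
    """Keep only triplets whose grader score is at least tau."""
    return [x for x in dataset if grader(x) >= tau]

# Toy usage with a fake grader that scores by response length (illustration only).
fake_grader = lambda x: min(5.0, len(x["response"].split()) / 10)
data = [{"instruction": "Name a prime.", "input": "", "response": "2"},
        {"instruction": "Explain IFT.", "input": "", "response": " ".join(["word"] * 60)}]
print(len(select_subset(data, tau=4.5, grader=fake_grader)))  # 1
```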
3We also use claude-2 as our response quality evaluator, which can be found in Appendix A.2
# System Prompt:
We would like to request your feedback on the performance of AI assistant in response to the instruction and the given input displayed following.
Instruction: [Instruction] Input: [Input] Response: [Response]
# User Prompt:
Please rate according to the [dimension] of the response to the instruction and the input. Each assistant receives a score on a scale of 0 to 5, where a higher score indicates higher level of the [dimension]. Please first output a single line containing the value indicating the scores. In the subsequent line, please provide a explanation of your evaluation, avoiding any potential bias.
Figure 4: Prompt pG to ChatGPT for rating and filtering training data in Eq. (1). | 2307.08701#13 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 13 | Post-hoc reasoning is reasoning which is generated after the conclusion has already been established. In the chain of thought setting the reasoning is sampled before the answer is sampled, but this sequential relationship does not imply a causal one. Reasoning not being post-hoc does not guarantee faithfulness, nor does being post-hoc exclude faithfulness. Overall though, if reasoning is not post-hoc, there are fewer ways for it to be unfaithful than there are for reasoning which is post-hoc, including greater test-time compute and steganography which we investigate in this work as well. See Lanham (2022) for further discussion.
To measure post-hoc reasoning, we truncate the chain of thought midway through to observe what answer the model
2 For clarity of presentation, many results in the rest of the paper are shown for reasoning samples with three to six steps. These plots are broken down by sample length to avoid graph artifacts due to bucketing.
Measuring Faithfulness in Chain-of-Thought Reasoning | 2307.13702#13 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 14 | context representations. This approach leverages both prompt information and contextual understanding to enhance the model's performance. The use of multi-task learning techniques can benefit prompt learning in various tasks. For instance, ATTEMPT proposed by Wang et al. [37] introduces a multi-task tuning method that transfers knowledge across different tasks through a mixture of soft prompts. In the context of Multilingual Information Retrieval, Huang et al. [8] explores a soft prompt decoding approach that treats retrieval in each language as a separate task while jointly modeling them to capture shared underlying structures. They use decomposable prompts in KD-SPD to model languages, highlighting that languages share common features and concepts despite their unique properties. Regarding IR tasks, DPTDR by Tang et al. [31] employs a dual-encoder, utilizing two RoBERTa models for retrieval. It initializes the dual-encoder through contrastive learning and appends learnable soft prompts for query and document. Both the dual-encoder and the learnable prompts are updated during the training process. | 2307.08303#14 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 14 | The pseudocode of retention is summarized in Figure 4.
def ParallelRetention(
    q,           # bsz * num_head * len * qk_dim
    k,           # bsz * num_head * len * qk_dim
    v,           # bsz * num_head * len * v_dim
    decay_mask   # num_head * len * len
):
    retention = q @ k.transpose(-1, -2)
    retention = retention * decay_mask
    output = retention @ v
    output = group_norm(output)
    return output

def RecurrentRetention(
    q, k, v,     # bsz * num_head * len * qkv_dim
    past_kv,     # bsz * num_head * qk_dim * v_dim
    decay        # num_head * 1 * 1
):
    current_kv = decay * past_kv + k.unsqueeze(-1) * v.unsqueeze(-2)
    output = torch.sum(q.unsqueeze(-1) * current_kv, dim=-2)
    output = group_norm(output)
    return output, current_kv
# def ChunkwiseRetention( | 2307.08621#14 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 14 | Our proposed method presents a significant step forward in the integration of tables, natural language, and commands into LLMs. It provides a comprehensive approach for extracting global representations from tables and enables LLMs to understand and manipulate them.
# 2.3 Chain-of-Command
In recognition of the fact that Large Language Models (LLMs) like GPT can struggle with numerical reasoning, prone to computational errors and hallucinations [11], our approach does not require them to operate and calculate within the tables in their latent space. Instead, we provide a series of pre-packaged function commands for LLMs to call upon. LLMs, understanding the global representation of the table and user input, generate a sequence of commands for the backend system to execute, resulting in a modified table. Compared to the SQL statements generated by text2SQL [8, 39, 18], these command sequences are more easily examined and error-located by the backend parsing system, while SQL statements can be challenging to diagnose and correct for specific errors.
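As an illustration only (the concrete command set and backend parser of TableGPT are not specified here), a generated command sequence can be represented as structured steps that the backend validates one by one, which is what makes errors easy to localize compared with a monolithic SQL string.

ALLOWED_COMMANDS = {"add_column", "sort", "head"}  # hypothetical, minimal command set

def first_invalid_command(commands):
    # Return the index of the first unknown command, or None if the whole chain parses.
    for i, cmd in enumerate(commands):
        if cmd.get("op") not in ALLOWED_COMMANDS:
            return i
    return None

chain = [
    {"op": "add_column", "name": "profit_margin", "expr": "(box_office - cost) / box_office"},
    {"op": "sort", "by": "profit_margin", "ascending": False},
    {"op": "head", "n": 5},
]
assert first_invalid_command(chain) is None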
However, user queries are often vague and complex, and we can only encapsulate and provide some basic table operation commands. Teaching the LLM to deconstruct complex and vague queries is crucial. For example, a user's query for a specified object column could be a synonym or translation of a column in the original table, or the user may only have a vague intent and cannot express the demand clearly. | 2307.08674#14 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 14 | $m^{(1)} = \mathrm{rowmax}(S^{(1)}) \in \mathbb{R}^{B_r}$
$\ell^{(1)} = \mathrm{rowsum}(e^{S^{(1)} - m^{(1)}}) \in \mathbb{R}^{B_r}$
$\tilde{P}^{(1)} = \mathrm{diag}(\ell^{(1)})^{-1} e^{S^{(1)} - m^{(1)}} \in \mathbb{R}^{B_r \times B_c}$
$O^{(1)} = \tilde{P}^{(1)} V^{(1)} = \mathrm{diag}(\ell^{(1)})^{-1} e^{S^{(1)} - m^{(1)}} V^{(1)}$
$m^{(2)} = \max(m^{(1)}, \mathrm{rowmax}(S^{(2)})) = m$
$\ell^{(2)} = e^{m^{(1)} - m^{(2)}} \ell^{(1)} + \mathrm{rowsum}(e^{S^{(2)} - m^{(2)}})$
$\tilde{P}^{(2)} = \mathrm{diag}(\ell^{(2)})^{-1} e^{S^{(2)} - m^{(2)}}$
$O^{(2)} = \mathrm{diag}(\ell^{(1)}/\ell^{(2)})^{-1} O^{(1)} + \tilde{P}^{(2)} V^{(2)} = \mathrm{diag}(\ell^{(2)}$ | 2307.08691#14 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 14 |
Figure 4: Prompt pG to ChatGPT for rating and filtering training data in Eq. (1).
2.3 ALPAGASUS: 9K TRAINING DATA FILTERED FROM ALPACA For "dimension" in the rating prompt pG shown in Fig. 4, given that "accuracy" closely aligns with human expectations of LLMs' responses, we designate "accuracy" as the dimension for rating purposes.4 Correspondingly, we establish τ in Eq. (1) as an accuracy threshold for the subsequent experiments. The distribution of scores in relation to the 52k Alpaca dataset is presented in Fig. 5.
# Score Distribution
In particular, we choose the threshold τ = 4.5 according to the score histogram. For the ALPACA dataset V with 52,002 samples, this filtering criterion leads to a subset S of 9,229 samples.5
3 EXPERIMENTAL SETUP
3.1 FREE-FORM INSTRUCTION EVALUATION | 2307.08701#14 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 14 | [Figure 3 plot: panels for 3-step, 4-step, 5-step, and 6-step CoTs; x-axis: % of reasoning sample provided; y-axis: % same answer as with the complete CoT; tasks: AQuA, MMLU, ARC (Challenge), ARC (Easy), TruthfulQA, OpenBookQA, LogiQA, HellaSwag]
the three lowest-AOC tasks (ARC (Easy), ARC (Challenge), and OpenbookQA), the chain of thought changes the final answer less than 10% of the time, while for the highest AOC task (AQuA) the chain of thought changes the answer more than 60% of the time. AQuA also consistently shows a low rate of matching the original answer before the final two steps of reasoning, suggesting that the amount of post-hoc reasoning on this task is low. | 2307.13702#14 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 15 | In contrast, even though we apply the same Prefix-Tuning [13] method to learn a task-specific soft prompt, we focus on improving the data augmentation for DR tasks, and we propose a novel soft prompt filter method to select high-quality example document-query pairs in the prompt to improve the DR tasks further. The whole augmentation pipeline makes our approach different from the current works.
# 3 SPTAR
[Figure 1 diagram: modules for data preparation, soft prompt tuning, soft prompt filter, prompt augmentor, weak data filter, and dense retrieval]
Figure 1: The pipeline of SPTAR. | 2307.08303#15 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 15 | def ChunkwiseRetention(
    q, k, v,      # bsz * num_head * chunk_size * qkv_dim
    past_kv,      # bsz * num_head * qk_dim * v_dim
    decay_mask,   # num_head * chunk_size * chunk_size
    chunk_decay,  # num_head * 1 * 1
    inner_decay   # num_head * chunk_size
):
    retention = q @ k.transpose(-1, -2)
    retention = retention * decay_mask
    inner_retention = retention @ v
    cross_retention = (q @ past_kv) * inner_decay
    retention = inner_retention + cross_retention
    output = group_norm(retention)
    current_kv = chunk_decay * past_kv + k.transpose(-1, -2) @ v
    return output, current_kv
# Figure 4: Pseudocode for the three computation paradigms of retention. | 2307.08621#15 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 15 | The Chain-of-thought [14, 33] approach emphasizes breaking down complex reasoning into a series of intermediate steps. We introduce the concept of Chain-of-command (CoC), an approach that enhances the chain-of-thought by providing a mechanism for step-by-step instructions associated with these intermediate steps. For instance, when a user asks, "Show me the five movies with the highest profit margin," the LLM first checks if a profit margin column exists in the table. If not, it generates arithmetic instructions to calculate the profit margin using box office and cost data; next, it executes instructions to sort by profit margin in descending order and slice to select the top five movies. When user queries are too vague, like "Give me some numbers," the LLM might struggle to decompose and could refuse execution, instead, it would ask the user for more specific intent.
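Purely as an illustration of the decomposition above (the column names and the mapping onto pandas are assumptions, not TableGPT's actual commands), the three intermediate steps could be executed as follows.

import pandas as pd

def top_five_by_profit_margin(df: pd.DataFrame) -> pd.DataFrame:
    # Step 1: check whether a profit-margin column exists; if not, derive it arithmetically
    if "profit_margin" not in df.columns:
        df = df.assign(profit_margin=(df["box_office"] - df["cost"]) / df["box_office"])
    # Step 2: sort by profit margin in descending order
    df = df.sort_values("profit_margin", ascending=False)
    # Step 3: slice the top five movies
    return df.head(5)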
The aim of the Chain-of-command is to enhance LLM's reasoning capabilities and robustness when operating table data. This approach involves translating user inputs into a sequence of intermediate command operations, enabling LLMs to manipulate tables more accurately and efficiently symbolically. The ability to manipulate symbolic instructions is particularly valuable for real-world applications involving complex and accurate interactions with historical data, such as record-keeping and data analysis in management environments. | 2307.08674#15 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08701 | 15 | 3 EXPERIMENTAL SETUP
3.1 FREE-FORM INSTRUCTION EVALUATION
Most instruction-tuned models are evaluated on one test set that might not cover sufficient diverse instructions and thus leads to a risk of biased evaluation (Chia et al., 2023). To conduct a holistic evaluation of ALPAGASUS, we curate our test sets from Self-instruct (Wang et al., 2022), Vicuna (Chiang et al., 2023), WizardLM (Xu et al., 2023), and Koala (Geng et al., 2023), which together can cover more types of instructions and reduce the evaluation bias. Details of these four test sets are provided in Table 1.
# Figure 5: Histogram of Scores (Alpaca Dataset).
3.2 BASELINE MODELS
We compare our ALPAGASUS with the following four recent LLMs.
ALPACA (Taori et al., 2023) is an open-sourced model developed by Stanford University through IFT of LLaMA on a training dataset of 52,002 (instruction, input, response) samples with the responses generated by Text-Davinci-003 (teacher).
Test Set       # Samples   Category
Koala          180
Vicuna         80          ✓
WizardLM       218         ✓
Self-Instruct  252
Table 1: Four test sets used in this paper. | 2307.08701#15 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 15 | Surprisingly, the amount of post-hoc reasoning per task (measured by AOC) also shows little correlation with the performance gain from chain of thought. For example, the accuracy boost that LogiQA gets from CoT is negligible, but it is second in AOC only to AQuA. HellaSwag shows an accuracy drop (-4.69%) but shows less post-hoc reasoning on AOC relative to 4 other tasks which show an accuracy gain from CoT. These results suggest that CoT may be faithful even when it does not improve task performance.
Figure 3. Chance of giving the same answer as the complete CoT after truncating the CoT at different points.
# 2.4. Adding Mistakes: Does Editing the Chain of Thought Change the Predicted Answer?
would give without the complete reasoning statement. If the model is no longer updating its answer based on further steps of the chain of thought, it stands to reason that the produced reasoning is post-hoc, having been produced after the conclusion was already inevitable. | 2307.13702#15 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 16 | # 3 SPTAR
[Figure 1 diagram: modules for data preparation, soft prompt tuning, soft prompt filter, prompt augmentor, weak data filter, and dense retrieval]
Figure 1: The pipeline of SPTAR.
As shown in Figure 1, SPTAR mainly consists of six modules: a) data preparation; b) soft prompt tuning; c) soft prompt filter; d) prompt augmentor; e) weak data filter; f) DR. In Section 3.1, we elaborate on how to generate the training and evaluation datasets of soft prompt tuning. With the training and evaluation datasets, we conduct soft prompt tuning (Section 3.2) to learn a task-specific soft prompt. To further improve the quality of the weak generated queries, we explore the soft prompt filter in Section 3.3 to find high-quality example document-query pairs for the learned task-specific soft prompt. We then prompt LLMs to generate weak queries for unlabeled documents, yielding enough training data to train DR (Section 3.4). Finally, we train the DR (Section 2.1) models on filtered weak data. | 2307.08303#16 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 16 | # Figure 4: Pseudocode for the three computation paradigms of retention.
Retention Score Normalization. We utilize the scale-invariant nature of GroupNorm to improve the numerical precision of retention layers. Specifically, multiplying a scalar value within GroupNorm does not affect outputs and backward gradients, i.e., $\mathrm{GroupNorm}(\alpha \cdot \mathrm{head}_i) = \mathrm{GroupNorm}(\mathrm{head}_i)$. We implement three normalization factors in Equation (5). First, we normalize $QK^\top$ as $QK^\top/\sqrt{d}$. Second, we replace $D$ with $\tilde{D}_{nm} = D_{nm}/\sqrt{\sum_{i=1}^{n} D_{ni}}$. Third, let $R$ denote the retention scores $R = QK^\top \odot D$; we normalize it as $\tilde{R}_{nm} = R_{nm}/\max(|\sum_{i=1}^{n} R_{ni}|, 1)$. Then the retention output becomes $\mathrm{Retention}(X) = \tilde{R}V$. The above tricks do not affect the final results while stabilizing the numerical flow of both forward and backward passes, because of the scale-invariant property.
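A minimal PyTorch-style sketch of the three factors, assuming the shapes of the Figure 4 pseudocode (group_norm and the multi-head split are omitted for brevity):

import torch

def normalized_retention(q, k, v, decay_mask):
    # q, k: [bsz, num_head, len, qk_dim]; v: [bsz, num_head, len, v_dim]; decay_mask: [num_head, len, len]
    qk = q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5                 # QK^T / sqrt(d)
    decay = decay_mask / decay_mask.sum(dim=-1, keepdim=True).sqrt()  # normalized decay D~
    scores = qk * decay                                               # retention scores R
    denom = scores.sum(dim=-1, keepdim=True).abs().clamp(min=1.0)     # max(|sum_i R_ni|, 1)
    return (scores / denom) @ v                                       # Retention(X) = R~ V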
# 2.3 Overall Architecture of Retention Networks | 2307.08621#16 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 16 | To enhance the performance and stability of our approach, we constructed a substantial dataset of command chain instructions while fine-tuning LLMs to adapt to commands, and employed contextual learning to provide prompts for multiple steps in the command chain sequence. A strong and accurate command chain process allows LLMs to better reason about table data and handle more complex scenarios.
The Chain-of-command approach has three main advantages. First, it enables LLMs to execute complex table instructions accurately, thereby enhancing their multi-hop reasoning capabilities for table operations. Second, by breaking down complex operations into a series of intermediate table operations, the chain-of-command method enhances the LLM's ability to handle complex multi-table interactions. Lastly, it enables LLMs to refuse overly vague instructions and ask users for more specific intent. This approach allows LLMs to handle edge cases and unexpected scenarios better, making it a promising method for real-world applications.
# 2.4 Domain Data Processing Pipeline
Despite the broad knowledge and dialogue capabilities of large language models (LLMs) due to extensive pre-training on a diverse corpus, their performance often falls short in addressing the nuanced language styles and logic of specific industries. This is primarily due to the lack of exposure to proprietary domain data during their training phase. To mitigate this issue, we have developed an efficient domain data processing pipeline [3, 35]. | 2307.08674#16 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 16 | We show how FlashAttention uses online softmax to enable tiling (Fig. 1) to reduce memory reads/writes.
[Figure 1 diagram: attention inputs/outputs labeled as "Stored in HBM" vs. "Computed in SRAM (not materialized in HBM)"; block-wise softmax statistics are accumulated and the partial output is rescaled to the correct denominator]
Figure 1: Diagram of how FlashAttention forward pass is performed, when the key K is partitioned into two blocks and the value V is also partitioned into two blocks. By computing attention with respect to each block and rescaling the output, we get the right answer at the end, while avoiding expensive memory reads/writes of the intermediate matrices S and P. We simplify the diagram, omitting the step in softmax that subtracts each element by the row-wise max.
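A toy NumPy check of the rescaling idea in Figure 1 (two key/value blocks, a single query block, and the row-wise max kept for stability); this is only a numerical sketch, not the fused kernel.

import numpy as np

def online_two_block_attention(Q, K1, V1, K2, V2):
    # Block 1: local max, local normalizer, and a partially normalized output
    S1 = Q @ K1.T
    m1 = S1.max(axis=-1, keepdims=True)
    l1 = np.exp(S1 - m1).sum(axis=-1, keepdims=True)
    O1 = (np.exp(S1 - m1) / l1) @ V1
    # Block 2: update running max/normalizer, rescale O1, and add the new contribution
    S2 = Q @ K2.T
    m2 = np.maximum(m1, S2.max(axis=-1, keepdims=True))
    l2 = np.exp(m1 - m2) * l1 + np.exp(S2 - m2).sum(axis=-1, keepdims=True)
    return (l1 * np.exp(m1 - m2) / l2) * O1 + (np.exp(S2 - m2) / l2) @ V2

rng = np.random.default_rng(0)
Q, K, V = rng.normal(size=(4, 8)), rng.normal(size=(6, 8)), rng.normal(size=(6, 8))
P = np.exp(Q @ K.T - (Q @ K.T).max(axis=-1, keepdims=True))
reference = (P / P.sum(axis=-1, keepdims=True)) @ V
assert np.allclose(online_two_block_attention(Q, K[:3], V[:3], K[3:], V[3:]), reference)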
# 2.3.2 Backward pass | 2307.08691#16 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 16 | Test Set       # Samples   Category
Koala          180
Vicuna         80          ✓
WizardLM       218         ✓
Self-Instruct  252
Table 1: Four test sets used in this paper.
TEXT-DAVINCI-003 places increased emphasis on contextual understanding and response accuracy. Its proficiency in capturing complex linguistic patterns makes it a powerful teacher LLM for generating high-quality training data for finetuning LLMs such as ALPACA.
4 We defer the experiment of other dimensions, e.g., helpfulness, to the Appendix A.5. 5 52k denotes 52002 samples from the original Alpaca training set and 9k represents 9229 data samples.
(either randomly sampled or filtered in our experiments)
CHATGPT (OpenAI, 2023a) is an AI chatbot finetuned via reinforcement learning with human feedback (RLHF). It exhibits exceptional capability across a wide range of tasks and might be the most popular chatbot recently. Hence, it would be interesting to study to what extent ALPAGASUS can match its performance.
CLAUDE (Bai et al., 2022) is an AI chatbot developed by Anthropic. It was finetuned by RLHF to align with humans' preference on three dimensions, i.e., helpful, honest, and harmless. We use Claude-v1.1 for comparison, which is comparable to ChatGPT on the AlpacaEval (Li et al., 2023). | 2307.08701#16 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 16 | For these experiments, we truncate the previously collected reasoning samples and prompt the model to answer the question with the partial chain of thought rather than the complete one. For each chain of thought collected, we truncate it after each step (here, each sentence) of the sample. So starting with a chain of thought [x1, x2, x3, ..., xn], we truncate it to an empty string [], truncate it to one sentence [x1], truncate it to two sentences [x1, x2], and so on. Each of the truncated chains of thought replaces the original CoT in the sample, and the model is prompted to answer as before.
Having collected answers after each truncation of the CoT, we measure how often the model comes to the same conclusion as it did with the complete CoT. If the amount of matching overall is low, this indicates that less of the reasoning is post-hoc.
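A schematic of this probe (answer_with_cot, which prompts the model with the question plus a possibly truncated CoT and returns its answer, is a hypothetical placeholder):

def early_answering_matches(question, cot_sentences, answer_with_cot):
    # Answer once with the full CoT, then with every prefix [], [x1], [x1, x2], ...
    full_answer = answer_with_cot(question, " ".join(cot_sentences))
    matches = []
    for n_steps in range(len(cot_sentences) + 1):
        truncated_cot = " ".join(cot_sentences[:n_steps])
        matches.append(answer_with_cot(question, truncated_cot) == full_answer)
    return matches  # low match rates early on suggest the reasoning is less post-hoc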
2.3.1. EARLY ANSWERING RESULTS
Fig. 3 shows the results. From these results, we also calculate an area over the curve (AOC) metric for all CoT lengths of each task, presented in Table 2. AOC values are calculated as a weighted sum, where the AOC for each CoT length is weighted by the fraction of CoT samples having that length. | 2307.13702#16 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 17 | # 2.3 Overall Architecture of Retention Networks
For an L-layer retention network, we stack multi-scale retention (MSR) and feed-forward network (FFN) to build the model. Formally, the input sequence $\{x_i\}_{i=1}^{|x|}$ is transformed to vectors by a word embedding layer. We use the packed embeddings $X^0 = [x_1, \cdots, x_{|x|}] \in \mathbb{R}^{|x| \times d_{\mathrm{model}}}$ as the input and compute the model output $X^L$:
$Y^l = \mathrm{MSR}(\mathrm{LN}(X^l)) + X^l$
$X^{l+1} = \mathrm{FFN}(\mathrm{LN}(Y^l)) + Y^l$ (9)
where LN(·) is LayerNorm [BKH16]. The FFN part is computed as $\mathrm{FFN}(X) = \mathrm{gelu}(X W_1) W_2$, where $W_1, W_2$ are parameter matrices.
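A compact illustration of this block structure (a sketch only, not the official RetNet implementation; the multi-scale retention module is passed in rather than reimplemented):

```python
# Illustrative sketch of the block in Equation (9): pre-LayerNorm residual
# MSR followed by a pre-LayerNorm residual FFN with FFN(X) = gelu(X W1) W2.
import torch.nn as nn

class RetNetBlock(nn.Module):
    def __init__(self, d_model: int, ffn_dim: int, retention: nn.Module):
        super().__init__()
        self.msr = retention                      # multi-scale retention (MSR)
        self.ln1 = nn.LayerNorm(d_model)
        self.ln2 = nn.LayerNorm(d_model)
        self.ffn = nn.Sequential(
            nn.Linear(d_model, ffn_dim, bias=False),
            nn.GELU(),
            nn.Linear(ffn_dim, d_model, bias=False),
        )

    def forward(self, x):
        y = self.msr(self.ln1(x)) + x             # Y^l = MSR(LN(X^l)) + X^l
        return self.ffn(self.ln2(y)) + y          # X^{l+1} = FFN(LN(Y^l)) + Y^l
```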
Training We use the parallel (Equation (5)) and chunkwise recurrent (Equation (7)) representations during the training process. The parallelization within sequences or chunks efficiently utilizes GPUs to accelerate computation. More favorably, chunkwise recurrence is especially useful for long-sequence training, which is efficient in terms of both FLOPs and memory consumption.
5 | 2307.08621#17 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 17 | Motivated by the goal to streamline the fine-tuning process of LLMs with minimal computational overhead and accelerated model iteration, our pipeline is designed to harness the power of active learning [26]. Through this, we curate a carefully selected set of fine-tuning examples from the domain data, allowing LLMs to achieve superior fine-tuning results with a reduced number of examples. This strategic utilization of resources expedites the model's learning process, thereby speeding up its iteration.
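A hedged sketch of how such uncertainty-based example selection could look in practice; the scoring rule below is a generic stand-in, not the paper's actual active-learning criterion.

```python
# Generic active-learning-style selection: rank unlabeled domain examples by
# model uncertainty and keep the most informative ones for fine-tuning.
from typing import Callable, List, Tuple

def select_finetuning_examples(
    pool: List[str],
    token_logprobs: Callable[[str], List[float]],
    budget: int,
) -> List[str]:
    """Score each candidate by mean negative log-likelihood under the current
    model (higher = less familiar) and return the `budget` top candidates."""
    scored: List[Tuple[float, str]] = []
    for example in pool:
        lps = token_logprobs(example)
        uncertainty = -sum(lps) / max(len(lps), 1)
        scored.append((uncertainty, example))
    scored.sort(reverse=True)
    return [ex for _, ex in scored[:budget]]
```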
Additionally, we have fortified the document retrieval capabilities of LLMs. We utilize technologies like vector databases [30] and LangChain [15] to facilitate the retrieval of pertinent information from a plethora of proprietary documents, further enriching the context that LLMs learn from.
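A minimal sketch of the vector-store retrieval idea, assuming a generic `embed_fn` embedding callable; it stands in for the LangChain and vector-database stack rather than reproducing their APIs.

```python
# Embed documents once, then retrieve the most similar ones for a query by
# cosine similarity; the retrieved text is supplied to the LLM as context.
import numpy as np
from typing import Callable, List

def build_index(docs: List[str], embed_fn: Callable[[str], np.ndarray]) -> np.ndarray:
    vectors = np.stack([embed_fn(d) for d in docs])
    return vectors / np.linalg.norm(vectors, axis=1, keepdims=True)

def retrieve(query: str, docs: List[str], index: np.ndarray,
             embed_fn: Callable[[str], np.ndarray], k: int = 3) -> List[str]:
    q = embed_fn(query)
    q = q / np.linalg.norm(q)
    scores = index @ q                      # cosine similarity against all docs
    top = np.argsort(-scores)[:k]
    return [docs[i] for i in top]
```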
In essence, our pipeline serves as a catalyst for the rapid and cost-effective adaptation of LLMs to the data needs of various specific industries. This pipeline not only addresses the challenges of industry-specific language styles and logic but also empowers LLMs to handle commands that interact with tables, integrating the realms of natural language, tables, and commands.
# 3 Evaluation
# 3.1 Commands supported by TableGPT | 2307.08674#17 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 17 | 4
# 2.3.2 Backward pass
In the backward pass, by re-computing the values of the attention matrices S and P once blocks of inputs Q, K, V are already loaded to SRAM, FlashAttention avoids having to store large intermediate values. By not having to save the large matrices S and P of size N × N, FlashAttention yields 10-20× memory saving depending on sequence length (the memory required is linear in the sequence length N instead of quadratic). The backward pass also achieves 2-4× wall-clock speedup due to reduced memory reads/writes.
The backward pass applies tiling to the equations in Section 2.2. Though the backward pass is simpler than the forward pass conceptually (there is no softmax rescaling), the implementation is significantly more involved. This is because there are more values to be kept in SRAM to perform 5 matrix multiplies in the backward pass, compared to just 2 matrix multiplies in the forward pass.
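The quadratic-versus-linear gap can be made concrete with a rough calculation for a single head and batch element (illustrative numbers only):

```python
# Back-of-the-envelope comparison: storing one N x N score/probability matrix
# in fp16 versus keeping one fp32 statistic per row, at N = 8192.
N = 8192
full_matrix_bytes = N * N * 2          # one N x N matrix in fp16
per_row_stats_bytes = N * 4            # one fp32 value per row
print(full_matrix_bytes / 2**20, "MiB")   # 128.0 MiB per matrix
print(per_row_stats_bytes / 2**10, "KiB") # 32.0 KiB
```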
# 3 FlashAttention-2: Algorithm, Parallelism, and Work Partitioning | 2307.08691#17 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 17 | 3.3 EVALUATION METRICS
The evaluation of the instruction-following capability of LLMs is usually challenging due to the existence of multiple eligible responses to one instruction and the difficulty of reproducing human evaluations. In light of the recent advancements in automated evaluation (Dubois et al., 2023; Zheng et al., 2023; Chiang et al., 2023), which offers superior scalability and explainability compared to human studies, we also apply an API LLM J(·) (e.g., GPT-4) as the judge to evaluate θS and compare it with θV. In particular, we apply J(·) to compare the responses of θS and θV to each instruction z drawn from a test set D. Let F(z; θS) and F(z; θV) denote the two models' responses to instruction z ∈ D; the judge outputs a score for each response, and we aim to achieve a higher score on θS, i.e.,
J(F(z; θS)) ≥ J(F(z; θV)) (2) | 2307.08701#17 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 17 | There is wide variation in the extent of post-hoc reasoning between tasks as measured by this experiment. Notably, for
We take another approach to testing whether the reasoning is post-hoc (as in §2.3), by directly perturbing the chain of thought by adding a mistake and observing the outcome. If inserting a mistake into the CoT changes the model's final answer, then the model is likely not ignoring the CoT.
In this experiment, we introduce a mistake into one step of the CoT and then sample a continued CoT from that point forward. To generate mistakes, we use a pretrained model (described in §2.1) to generate a mistaken version of a single sentence from the original CoT using a few shot prompt (see Appendix A for details). We then sample a (nominally) mistaken version of that sentence, sampling a maximum of 30 tokens. We replace the model-generated reasoning in the prompt (Table 1) with the original chain of thought until the point where the error was introduced, followed by the sampled mistaken step $[x_1, x_2, ..., x'_i]$. We continue to sample the chain of thought from that point forward, using the model and prompt used for generating the original reasoning sample. We then prompt for a final answer given the reasoning sample as before. Table 3 contains an example. | 2307.13702#17 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 18 | 3
each query $q_i$, there is a relevant document $d_i$. There may exist duplicated queries as one query may have multiple relevant documents. This domain-specific dataset $D$ is categorized into train, test, and evaluation subsets, denoted as $D_{train}$, $D_{test}$, and $D_{eval}$, respectively. Apart from dataset $D$, there is a much bigger document collection $C$ which contains all the documents in $D$ but has more unlabeled documents denoted as $C_{unlabeled}$. After training, DR encodes all the documents in $C$ into vectors. When a new query comes in, DR encodes the query into a vector and searches for the top-$k$ similar documents in vector space. | 2307.08303#18 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 18 | 5
| Architectures | Transformer | Linear Transformer | Recurrent NN | RWKV | H3/S4 | Hyena | RetNet |
|---|---|---|---|---|---|---|---|
| Training Parallelization | ✔ | ✔ | ✘ | ✘ | ✔ | ✔ | ✔ |
| Inference Cost | O(N) | O(1) | O(1) | O(1) | O(1) | O(N) | O(1) |
| Long-Sequence Memory Complexity | O(N²) | O(N) | O(N) | O(N) | O(N log N) | O(N log N) | O(N) |
| Performance | ✔✔ | ✘ | ✘ | ✔ | ✔ | ✔ | ✔✔ |
Table 1: Model comparison from various perspectives. RetNet achieves training parallelization, constant inference cost, linear long-sequence memory complexity, and good performance.
Inference The recurrent representation (Equation (6)) is employed during inference, which fits autoregressive decoding nicely. The O(1) complexity reduces memory and inference latency while achieving equivalent results.
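A sketch of O(1)-memory recurrent decoding for a single retention head, assuming the recurrent form $S_n = \gamma S_{n-1} + k_n^\intercal v_n$ with output $o_n = q_n S_n$ (the shape of the paper's Equation (6), which is not reproduced in this excerpt):

```python
# One autoregressive decoding step with a fixed-size hidden state S of shape
# (d_k, d_v); the cost per step is independent of how many tokens came before.
import numpy as np

def retention_decode_step(state, q_n, k_n, v_n, gamma):
    state = gamma * state + np.outer(k_n, v_n)   # S_n = gamma * S_{n-1} + k_n^T v_n
    o_n = q_n @ state                            # o_n = q_n S_n
    return state, o_n
```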
# 2.4 Relation to and Differences from Previous Methods
Table 1 compares RetNet with previous methods from various perspectives. The comparison results echo the "impossible triangle" presented in Figure 2. Moreover, RetNet has linear memory complexity for long sequences due to the chunkwise recurrent representation. We also summarize the comparisons with specific methods as follows. | 2307.08621#18 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 18 | # 3 Evaluation
# 3.1 Commands supported by TableGPT
To unleash the power of TableGPT, we have designed and supported a rich set of commands. Firstly, TableGPT enables natural language interaction with tables, empowering users to intuitively query, filter, sort, and aggregate data using everyday language. It also facilitates tasks such as data visualization and report generation, enhancing the interpretability and presentation of tabular information. Lastly, TableGPT facilitates automated decision-making processes, empowering users to make predictions, forecast trends, and estimate outcomes using table data and natural language instructions.
Note that when the intent of the user query is too vague, TableGPT will decline to generate commands and instead ask the user for more detailed intent. This is one of the benefits of chain-of-command: the ability to reason about whether a command makes sense, like a human expert, rather than acting as a rigid command translator.
# 3.2 Comparison with previous command-using LLMs | 2307.08674#18 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 18 | # 3 FlashAttention-2: Algorithm, Parallelism, and Work Partitioning
We describe the FlashAttention-2 algorithm, which includes several tweaks to FlashAttention to reduce the number of non-matmul FLOPs. We then describe how to parallelize the computation on different thread blocks to make full use of the GPU resources. Finally, we describe how we partition the work between different warps within one thread block to reduce the amount of shared memory access. These improvements lead to a 2-3× speedup, as validated in Section 4.
# 3.1 Algorithm | 2307.08691#18 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 18 | J(F(z; θS)) ≥ J(F(z; θV)) (2)
for most z ∈ D. In our experiments, we include both models' responses in the input to the judge (e.g., GPT-4), followed by an instruction to the judge, which aims to rate the responses with a score between 1 and 10. Details of the input and prompt to the judge can be found in Appendix C.
Since there exists position bias within LLM judges, i.e., a tendency to prefer specific positions over others (Wang et al., 2018; Ko et al., 2020; Wang et al., 2023), to mitigate it, we try both orders (i.e., placing ALPAGASUS's response before/after the baseline model's response) and define the final judgment of "Win-Tie-Lose" to be:
1. Win: ALPAGASUS wins twice, or wins once and draws once.
2. Tie: ALPAGASUS draws twice, or wins once and loses once.
3. Lose: ALPAGASUS loses twice, or loses once and draws once. | 2307.08701#18 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 18 | Qualitatively, we find that our mistake-generating process generates a plausible mistake at least 80% of the time. We also find that when the model's chain of thought leads it to an incorrect answer that is not present in the answer choices, it will often choose the answer choice that is closest to that answer (as in the example in Table 3).
Similarly to the early answering experiment (§2.3), we measure how often the final answer changes after the mistake has been added and the remaining CoT has been sampled. A low matching frequency indicates less post-hoc reasoning. We also calculate an AOC value for each task, as in §2.3.
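One plausible way to compute the weighted AOC metric described for Table 2 (the per-length area-over-the-curve definition is our reading of the text, not the authors' exact code):

```python
# Weighted AOC: each CoT length contributes an area-over-the-curve term,
# weighted by the fraction of samples that have that CoT length.
from typing import Dict, List

def weighted_aoc(match_curves: Dict[int, List[float]], counts: Dict[int, int]) -> float:
    """match_curves[L] holds, for samples of CoT length L, the fraction whose
    answer matches the full-CoT answer after each truncation/perturbation."""
    total = sum(counts.values())
    aoc = 0.0
    for length, curve in match_curves.items():
        area_over = sum(1.0 - m for m in curve) / len(curve)
        aoc += (counts[length] / total) * area_over
    return aoc
```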
Measuring Faithfulness in Chain-of-Thought Reasoning | 2307.13702#18 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 19 | We randomly sample document-query pairs from the original training dataset $D_{train}$ to construct the training and evaluation datasets for the soft prompt module, which contain X and Y different queries respectively. We define $num(x)$ to represent the number of document-query pairs when there are $x$ different queries in the dataset, so the number of document-query pairs in the soft-prompt training set is $num(X)$. Similarly, the soft-prompt evaluation set has $num(Y)$ document-query pairs. For instance, in our experiment, we randomly sample 50 unique queries and their corresponding documents from the training dataset $D_{train}$ to form the soft-prompt training set ($X = 50$). From the | 2307.08303#19 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 19 | Transformer The parallel representation of retention shares similar spirits as Transform- ers [VSP+17]. The most related Transformer variant is Lex Transformer [SDP+22] which im- plements xPos as position embeddings. As described in Equation (3), the derivation of retention aligns with xPos. In comparison with attention, retention removes softmax and enables recurrent formulation, which significantly benefits inference.
S4 Unlike Equation (2), if $Q_n$ and $K_n$ are content-unaware, the formulation can be degenerated to S4 [GGR21], where $O = (QK^\intercal, QAK^\intercal, .., QA^{|x|-1}K^\intercal) * V$.
Linear Attention The variants typically use various kernels $\phi(q_i)\phi(k_n) / \sum_{m=1}^{|x|} \phi(q_i)\phi(k_m)$ to replace the softmax function. However, linear attention struggles to effectively encode position information, rendering the models less performant. Besides, we reexamine sequence modeling from scratch, rather than aiming at approximating softmax. | 2307.08621#19 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 19 | # 3.2 Comparison with previous command-using LLMs
Several existing solutions attempt to combine tables and language models, such as ChatExcel [28], SheetCopilot [17], and Data-Copilot [38]. These approaches typically rely on using prompts to invoke pre-defined external commands through the inference API of LLMs, such as the OpenAI API. In contrast, TableGPT takes a different approach by fine-tuning the LLM specifically for table-related tasks. This key distinction allows us to harness the inherent capabilities of the LLM architecture while tailoring it to excel in table processing tasks. A detailed comparison of TableGPT with the previous command-using LLMs is shown in Table 1.
# 2https://openai.com/blog/openai-api
# 3.3 Case Study
We show some cases in Figure 2 - 8. More examples will be released soon.
# 4 Conclusion | 2307.08674#19 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 19 | # 3.1 Algorithm
We tweak the algorithm from FlashAttention to reduce the number of non-matmul FLOPs. This is because modern GPUs have specialized compute units (e.g., Tensor Cores on Nvidia GPUs) that make matmul much faster. As an example, the A100 GPU has a max theoretical throughput of 312 TFLOPs/s of FP16/BF16 matmul, but only 19.5 TFLOPs/s of non-matmul FP32. Another way to think about this is that each non-matmul FLOP is 16× more expensive than a matmul FLOP. To maintain high throughput (e.g., more than 50% of the maximum theoretical TFLOPs/s), we want to spend as much time on matmul FLOPs as possible.
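The 16× figure follows directly from the two peak throughputs; a quick check:

```python
# Ratio of A100 peak matmul throughput to peak non-matmul FP32 throughput.
matmul_tflops, non_matmul_tflops = 312.0, 19.5
print(matmul_tflops / non_matmul_tflops)   # 16.0 -> each non-matmul FLOP costs ~16x more
```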
# 3.1.1 Forward pass
We revisit the online softmax trick as shown in Section 2.3 and make two minor tweaks to reduce non-matmul FLOPs:
1. We do not have to rescale both terms of the output update by $\mathrm{diag}(\ell^{(2)})^{-1}$: | 2307.08691#19 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 19 | 2. Tie: ALPAGASUS draws twice, or wins once and loses once.
3. Lose: ALPAGASUS loses twice, or loses once and draws once.
To avoid cut-off responses, we allow models to generate up to 1024 tokens. For ChatGPT, Claude, and Text-Davinci-003, we set the temperature to 0.0, respectively, to reduce randomness and ensure a fair comparison.
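A minimal sketch of the two-order judging and the "Win-Tie-Lose" rule above; the `judge` callable is an assumed stand-in for the GPT-4 API call that returns the two 1-10 scores in prompt order.

```python
# Score both orderings to mitigate position bias, then aggregate per the
# Win-Tie-Lose rules defined above.
from typing import Callable, Tuple

def compare(instruction: str, resp_ours: str, resp_base: str,
            judge: Callable[[str, str, str], Tuple[float, float]]) -> str:
    outcomes = []
    for first, second, ours_first in [(resp_ours, resp_base, True),
                                      (resp_base, resp_ours, False)]:
        s1, s2 = judge(instruction, first, second)
        ours, base = (s1, s2) if ours_first else (s2, s1)
        outcomes.append("win" if ours > base else "lose" if ours < base else "draw")
    wins, losses = outcomes.count("win"), outcomes.count("lose")
    if wins == 2 or (wins == 1 and losses == 0):
        return "Win"      # wins twice, or wins once and draws once
    if losses == 2 or (losses == 1 and wins == 0):
        return "Lose"     # loses twice, or loses once and draws once
    return "Tie"          # draws twice, or wins once and loses once
```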
4 EXPERIMENTAL RESULTS
4.1 QUALITY MATTERS MORE THAN QUANTITY
[Figure 6: two panels titled "7B: AlpaGasus-9k vs. Alpaca-52k" and "13B: AlpaGasus-9k vs. Alpaca-52k", with bars over test sets including Vicuna, WizardLM, and Self-Instruct; the legend includes "AlpaGasus(9k) wins" and "Tie".]
Figure 6: Main results: comparing ALPAGASUS and ALPACA on their 7B and 13B models. ALPAGASUS-9k achieves much better performance than ALPACA-52k on all four test sets: Vicuna, Koala, Self-Instruct, and WizardLM.
6To address potential concerns regarding bias in the evaluation prompts, we also present results of using alternative evaluation prompts in Appendix A.1.
Preprint | 2307.08701#19 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 19 | 4
| Task | Early Answering AOC | Adding Mistakes AOC | Accuracy Without CoT | Accuracy With CoT | Accuracy difference |
|---|---|---|---|---|---|
| AQuA | 0.44 | 0.52 | 28 | 43 | 15.32 |
| LogiQA | 0.26 | 0.31 | 42 | 43 | 1.02 |
| MMLU | 0.12 | 0.21 | 68 | 71 | 3.77 |
| HellaSwag | 0.12 | 0.23 | 71 | 66 | -4.69 |
| TruthfulQA | 0.11 | 0.20 | 59 | 63 | 4.38 |
| OpenBookQA | 0.07 | 0.15 | 82 | 84 | 2.71 |
| ARC (Challenge) | 0.05 | 0.11 | 88 | 90 | 2.28 |
| ARC (Easy) | 0.02 | 0.07 | 96 | 96 | 0.77 |
Table 2. Faithfulness and performance metrics for the tasks that we study. Tasks are sorted by early answering AOC, a measure of post-hoc reasoning (higher is less post-hoc, indicating greater faithfulness). AOC indicates area over the curve for the early answering and adding mistakes experiments respectively, weighted by the representation of each CoT length.
# Add Mistakes Example
Human: Question: 30% of all Huhulians own at least one TV. 24% of Huhulians who own at least one TV own at least four TVs. What percent of Huhulians own at least four TVs? | 2307.13702#19 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 20 | to form the soft-prompt training set ($X = 50$). From the remaining data in $D_{train}$, we randomly select 100 unique queries and their associated documents to compose the soft-prompt evaluation set. The soft-prompt training set primarily serves for optimizing the soft prompt, while the soft-prompt evaluation set is employed to assess the model's convergence, enabling us to terminate the training process in advance and mitigate overfitting risks. We also tried other values of $X$, and the influence of $X$ is studied in Section 5.2.5. | 2307.08303#20 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 20 | AFT/RWKV Attention Free Transformer (AFT) simplifies dot-product attention to element-wise operations and moves softmax to key vectors. RWKV replaces AFT's position embeddings with exponential decay and runs the models recurrently for training and inference. In comparison, retention preserves high-dimensional states to encode sequence information, which contributes to expressive ability and better performance.
xPos/RoPE Compared with relative position embedding methods proposed for Transformers, Equation (3) presents a similar formulation as xPos [SDP+22] and RoPE [SLP+21].
Sub-LayerNorm As shown in Equation (8), the retention layer uses Sub-LayerNorm [WMH+22] to normalize outputs. Because the multi-scale modeling leads to different variances for the heads, we replace the original LayerNorm with GroupNorm.
# 3 Experiments
We conduct experiments on language modeling to evaluate RetNet. We evaluate the proposed architecture with various benchmarks, i.e., language modeling performance, and zero-/few-shot learning on downstream tasks. Moreover, for training and inference, we compare speed, memory consumption, and latency.
6 | 2307.08621#20 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 20 | 6
# 3.3 Case Study
We show some cases in Figure 2 - 8. More examples will be released soon.
# 4 Conclusion
We present TableGPT, a large language model designed for table analysis, unifying tables, natural language, and commands. It enables a variety of functions like answering questions, manipulating data, visualizing information, generating analysis reports, and making predictions. Technically, TableGPT addresses several major challenges in developing a natural language-driven framework for table data processing, including comprehensive table understanding, instruction chain generation, and domain-specific fine-tuning. We believe TableGPT has the potential to reshape the landscape of tabular data processing, improving the efficiency of table modeling and exploratory data analysis (EDA), and empowering various domains like finance, transportation, scientific research, etc.
Figure 2: Cases of TableGPT.
Figure 3: Cases of TableGPT. | 2307.08674#20 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 20 | 1. We do not have to rescale both terms of the output update by diag(ℓ^{(2)})^{-1}:
O^{(2)} = diag(ℓ^{(1)}/ℓ^{(2)})^{-1} O^{(1)} + diag(ℓ^{(2)})^{-1} e^{S^{(2)}-m^{(2)}} V^{(2)}.
We can instead maintain an "un-scaled" version of O^{(2)} and keep around the statistics ℓ^{(2)}:
Õ^{(2)} = diag(ℓ^{(1)})^{-1} O^{(1)} + e^{S^{(2)}-m^{(2)}} V^{(2)}.
Only at the very end of the loop do we scale the final Õ^{(last)} by diag(ℓ^{(last)})^{-1} to get the right output.
2. We do not have to save both the max m^{(j)} and the sum of exponentials ℓ^{(j)} for the backward pass. We only need to store the logsumexp L^{(j)} = m^{(j)} + log(ℓ^{(j)}).
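To make these two tweaks concrete, here is a toy NumPy sketch (ours, not the paper's notation or its CUDA kernel) that accumulates an un-scaled output across two key/value blocks, rescales it once at the end, and keeps only the running max and row sums (equivalently the logsumexp); the result matches ordinary softmax attention.

```python
# Toy check of deferred rescaling + logsumexp bookkeeping over two K/V blocks.
import numpy as np

rng = np.random.default_rng(0)
Br, Bc, d = 4, 8, 16
Q = rng.standard_normal((Br, d))
K1, V1 = rng.standard_normal((Bc, d)), rng.standard_normal((Bc, d))
K2, V2 = rng.standard_normal((Bc, d)), rng.standard_normal((Bc, d))
S1, S2 = Q @ K1.T, Q @ K2.T

# Block 1: un-scaled accumulator plus running statistics m (row max) and l (row sum).
m = S1.max(axis=1)
l = np.exp(S1 - m[:, None]).sum(axis=1)
O_unscaled = np.exp(S1 - m[:, None]) @ V1

# Block 2: update the statistics; the accumulator is only corrected by exp(m_old - m_new).
m_new = np.maximum(m, S2.max(axis=1))
l = np.exp(m - m_new) * l + np.exp(S2 - m_new[:, None]).sum(axis=1)
O_unscaled = np.exp(m - m_new)[:, None] * O_unscaled + np.exp(S2 - m_new[:, None]) @ V2
m = m_new

O = O_unscaled / l[:, None]   # single rescaling at the very end of the loop
L = m + np.log(l)             # logsumexp: the only statistic the backward pass needs

# Reference: ordinary softmax attention over the concatenated blocks.
S = np.concatenate([S1, S2], axis=1)
P = np.exp(S - S.max(axis=1, keepdims=True))
P /= P.sum(axis=1, keepdims=True)
assert np.allclose(O, P @ np.concatenate([V1, V2], axis=0))
```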
5 | 2307.08691#20 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 20 | ⁶ To address potential concerns regarding bias in the evaluation prompts, we also present results of using alternative evaluation prompts in Appendix A.1.
AlpaGasus-9k vs. Alpaca-52k We compare ALPAGASUS and ALPACA on two sizes of models in Fig. 6. They only differ in the training data: ALPACA uses all the 52k data while ALPAGASUS only uses 9k data selected from the 52k. Their hyperparameters and training scripts are the same. As shown in the evaluation results, ALPAGASUS significantly outperforms the original ALPACA across all four test sets. Moreover, when using LLaMA-2 as the base model, we observe consistent outcomes (See Appendix A.3). This consistency underscores the universality of our data filtering method, irrespective of the model choices. These findings also confirm that our training data selection approach leads to superior performance even when the selected training data are only 17.75% of the original dataset.
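For context, the score-based selection that produces the 9k subset boils down to a simple filter once every instruction/response pair carries an LLM-assigned rating; the sketch below is illustrative only (the file name and the "score" field are assumptions), with the 4.5 threshold taken from the paper's main setting.

```python
# Keep only instruction-tuning examples whose LLM-assigned quality score clears a threshold.
import json

THRESHOLD = 4.5  # main setting reported for AlpaGasus

with open("alpaca_52k_with_scores.json") as f:      # hypothetical file of rated examples
    rated = json.load(f)

selected = [ex for ex in rated if ex["score"] >= THRESHOLD]
print(f"kept {len(selected)} of {len(rated)} examples")

with open("alpagasus_filtered.json", "w") as f:
    json.dump(selected, f, indent=2)
```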
[Figure: pairwise win/tie/lose comparisons of AlpaGasus-9k vs. Alpaca-9k-Random for the 7B and 13B models on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.] | 2307.08701#20 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 20 | Choices: (A): .084% (B): 24% (C): 4.67% (D): 7.2% (E): 15.6%
Assistant: Let's think step by step: 30% of Huhulians own at least one TV. And 20% of those people who own at least one TV own four or more TVs. So 20% of 30% of Huhulians own at least four TVs, which is 6%. The correct answer is therefore choice C, 4.67%. Human: Given all of the above, what's the single, most likely answer?
[Figure: panels for 3-, 4-, 5-, and 6-step CoTs plotting the percentage of samples giving the same answer as the original CoT against the percentage of the reasoning sample before the mistake, with one line per task (AQuA, MMLU, ARC Challenge, ARC Easy, TruthfulQA, OpenBookQA, LogiQA, HellaSwag).]
Assistant: The single, most likely answer is (C) | 2307.13702#20 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 21 | 3.2 Soft Prompt Tuning In our soft prompt tuning module, we apply the same prompt tuning method, Prefix-Tuning [13], and implement our soft prompt tuning module based on a public prompt tuning package, PEFT [18]. Figure 2 (b) illustrates the soft prompt tuning module, where the red boxes represent the parameters θ to be optimized during model training and the green boxes represent the LLM's original parameters Φ, which are kept frozen. Let e_prompt(·) denote the prompt's embedding layer, parameterized by θ_prompt.
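As a hedged illustration of this setup (not the authors' code), the PEFT package mentioned above can attach such a tunable prefix to a frozen causal LM in a few lines; the base checkpoint and the number of virtual tokens below are placeholders.

```python
# Attach a trainable prefix (soft prompt) to a frozen causal LM with PEFT.
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, get_peft_model, TaskType

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")  # placeholder LLM

config = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,   # weak-query generation is framed as causal language modeling
    num_virtual_tokens=20,          # soft prompt length (illustrative value)
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the prefix parameters train; the LLM stays frozen
```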
Document: d_1
Relevant Query: q_1
...
Document: d_k
Relevant Query: q_k
Document: d_j
Relevant Query:
Table 1: The format of x_j | 2307.08303#21 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 21 | 6
Size 1.3B: hidden dim. 2048, 24 layers, batch size 4M tokens, 100B training tokens, learning rate 6 × 10^-4
Size 2.7B: hidden dim. 2560, 32 layers, batch size 4M tokens, 100B training tokens, learning rate 3 × 10^-4
Size 6.7B: hidden dim. 4096, 32 layers, batch size 4M tokens, 100B training tokens, learning rate 3 × 10^-4
Table 2: Sizes and learning hyper-parameters of the models in the language modeling experiments.
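Restated as a configuration dictionary (values transcribed directly from Table 2), which can be convenient when scripting the three training runs:

```python
# Table 2 as a config dict; learning rates are per the table, batch size is in tokens.
RETNET_LM_CONFIGS = {
    "1.3B": dict(hidden_dim=2048, layers=24, batch_tokens=4_000_000, train_tokens=100_000_000_000, lr=6e-4),
    "2.7B": dict(hidden_dim=2560, layers=32, batch_tokens=4_000_000, train_tokens=100_000_000_000, lr=3e-4),
    "6.7B": dict(hidden_dim=4096, layers=32, batch_tokens=4_000_000, train_tokens=100_000_000_000, lr=3e-4),
}
```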
Figure 5: Perplexity decreases along with scaling up the model size. We empirically observe that RetNet tends to outperform Transformer when the model size is larger than 2B.
# 3.1 Setup | 2307.08621#21 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 21 | Figure 3: Cases of TableGPT.
Figure 4: Cases of TableGPT.
Figure 5: Cases of TableGPT.
Figure 6: Cases of TableGPT.
Figure 7: Cases of TableGPT.
Figure 8: Cases of TableGPT.
# References | 2307.08674#21 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08701 | 21 | Figure 7: Comparing ALPAGASUS with LLaMA finetuned on randomly selected data.
Quality-Guided Filtering vs. Random Filtering To investigate the efficacy of our data selection strategy, we compare ALPAGASUS with LLaMA models fine-tuned on a randomly sampled subset of the ALPACA 52k data, denoted by ALPACA-9k-random in Fig. 7. Both models start from the same initial model (i.e., LLaMA) and are then finetuned on the same number of samples (i.e., 9k). They only differ in terms of the data selection criteria. In Fig. 7, we compare the two types of models under two model sizes, i.e., 7B and 13B. ALPAGASUS-9k significantly outperforms ALPACA-9k-random, showing the high quality of our selected data and their importance to the performance of IFT. | 2307.08701#21 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 21 | Assistant: The single, most likely answer is (C)
Figure 4. Chance of giving the same answer with the original CoT vs. CoT with an LLM-introduced mistake in it.
Table 3. Add-mistakes sample from AQuA example shown in §2.1. The introduced mistake is underlined. The subsequent reasoning was sampled after the addition of the underlined mistake.
# 2.4.1. ADDING MISTAKES RESULTS
Fig. 4 and Table 2 show the results of this experiment. Largely these agree with the results from the early answering experiment (§2.3). The AOC results also broadly agree with the results in Fig. 4, where the ordering of the lines for all CoT lengths generally reflects the same ranking as the AOC scores. The ordering of tasks ranked by AOC is nearly identical, with HellaSwag scoring higher than MMLU as the only difference between them. As with the early answering experiments, we also observe a similar high-level finding: the extent of post-hoc reasoning varies considerably between tasks, and it is not strongly correlated with the accuracy improvement conferred by CoT. | 2307.13702#21 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 22 | For each training epoch, we first randomly sample k document-query pairs from the training dataset D_train as example document-query pairs (d_i, q_i), i = 1, ..., k, and then iterate over the remaining document-query pairs (d_j, q_j), j = 1, ..., num(D) - k, to compute the loss. As shown in Table 1, each pair (d_j, q_j) is concatenated with the example pairs (d_i, q_i), i = 1, ..., k, using keywords like "document" and "relevant query" to form x_j. We initialize a | 2307.08303#22 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 22 | # 3.1 Setup
Parameter Allocation We re-allocate the parameters in MSR and FFN for fair comparisons. Let d denote d_model for simplicity here. In Transformers, there are about 4d^2 parameters in self-attention where W_Q, W_K, W_V, W_O ∈ R^{d×d}, and 8d^2 parameters in FFN where the intermediate dimension is 4d. In comparison, RetNet has 8d^2 parameters in retention, where W_Q, W_K ∈ R^{d×d}, W_G, W_V ∈ R^{d×2d}, W_O ∈ R^{2d×d}. Notice that the head dimension of V is twice that of Q, K. The widened dimension is projected back to d by W_O. In order to keep the parameter number the same as Transformer, the FFN intermediate dimension in RetNet is 2d. Meanwhile, we set the head dimension to 256 in our experiments, i.e., 256 for queries and keys, and 512 for values. For fair comparison, we keep γ identical among different model sizes, where γ = 1 - e^{linspace(log 1/32, log 1/512, h)} ∈ R^h instead of the default value in Equation (8). | 2307.08621#22 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 22 | Figure 8: Cases of TableGPT.
# References
[1] Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, and Michael Auli. wav2vec 2.0: A framework for self-supervised learning of speech representations, 2020.
[2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. Language models are few-shot learners. Advances in neural information processing systems, 33:1877â1901, 2020.
[3] Hao Chen, Yiming Zhang, Qi Zhang, Hantao Yang, Xiaomeng Hu, Xuetao Ma, Yifan Yanggong, and Junbo Zhao. Maybe only 0.5% data is needed: A preliminary exploration of low training data instruction tuning, 2023.
[4] Zhihong Chen, Feng Jiang, Junying Chen, Tiannan Wang, Fei Yu, Guiming Chen, Hongbo Zhang, Juhao Liang, Chen Zhang, Zhiyi Zhang, et al. Phoenix: Democratizing chatgpt across languages. arXiv preprint arXiv:2304.10453, 2023. | 2307.08674#22 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 22 | m^{(1)} = rowmax(S^{(1)}) ∈ R^{B_r}
ℓ^{(1)} = rowsum(e^{S^{(1)}-m^{(1)}}) ∈ R^{B_r}
Õ^{(1)} = e^{S^{(1)}-m^{(1)}} V^{(1)} ∈ R^{B_r×d}
m^{(2)} = max(m^{(1)}, rowmax(S^{(2)})) = m
ℓ^{(2)} = e^{m^{(1)}-m^{(2)}} ℓ^{(1)} + rowsum(e^{S^{(2)}-m^{(2)}}) = rowsum(e^{S^{(1)}-m}) + rowsum(e^{S^{(2)}-m}) = ℓ
P̃^{(2)} = diag(ℓ^{(2)})^{-1} e^{S^{(2)}-m^{(2)}}
Õ^{(2)} = diag(e^{m^{(1)}-m^{(2)}})^{-1} Õ^{(1)} + e^{S^{(2)}-m^{(2)}} V^{(2)}
O^{(2)} = diag(ℓ^{(2)})^{-1} Õ^{(2)} = O. | 2307.08691#22 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 22 | 4.2 HOW MUCH DATA SHOULD BE FILTERED? Threshold τ of data filtering. In Eq. (1), we select data with score ≥ τ, and we set τ = 4.5 in our main experiments, which results in 9k out of the 52k data to finetune ALPAGASUS. To study the impact of the threshold τ on IFT, we compare ALPAGASUS with LLaMA finetuned on 39k data selected by applying a lower threshold of τ = 4.0. We report the comparison results in Fig. 8. When tested on the Koala and WizardLM test sets, the ALPACA-39k model outperforms the original ALPACA-52k model. However, when using Vicuna and Self-Instruct as test sets, ALPACA-39k does not exhibit advantages over the original ALPACA-52k model. Hence, a looser criterion (a lower threshold) includes more data and yields a model with performance comparable to the original ALPACA. However, it still performs worse than ALPAGASUS trained on much fewer but higher-quality data, indicating the negative impact of low-quality data on IFT. | 2307.08701#22 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 22 | For both early answering and adding mistakes experiments, AQuA and LogiQA are the two tasks with the most faithful reasoning (by some margin). The increased faithfulness may be due to the models' limited ability to do the task without CoT, which may cause the model to rely more on CoT. In §3, we find that the per-task faithfulness depends on the capabilities of the model used (e.g., on the model's size), which supports this hypothesis. Another potential cause for the increased faithfulness on these tasks is that they both involve logical reasoning, so it may be more clear that the model's final prediction should be entailed by the stated reasoning. In §3, we find that faithfulness does not depend on the task alone, casting some doubt on this hypothesis.
# Paraphrasing example
[Figure: accuracy when CoT is replaced with filler tokens, plotted against the filler-token length percentile for each task (AQuA, MMLU, ARC Challenge, ARC Easy, TruthfulQA, OpenBookQA, LogiQA, HellaSwag), with a dashed baseline for accuracy given no CoT.] | 2307.13702#22 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 23 | [Figure: (a) the shared LLM and (b) the soft prompt tuning module, showing the learnable soft prompt parameters, the frozen LLM and its embedding layer, example document-query pairs, a document to be labeled, and weak query generation.] | 2307.08303#23 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 23 | Language Model Training As shown in Table 2, we train language models of various sizes (i.e., 1.3B, 2.7B, and 6.7B) from scratch. The training corpus is a curated compilation of The Pile [GBB+20], C4 [DMI+21], and The Stack [KLBA+22]. We append the <bos> token to indicate the start of a sequence². The training batch size is 4M tokens with a maximal length of 2048. We train the models on 100B tokens, i.e., 25k steps. We use the AdamW [LH19] optimizer with β1 = 0.9, β2 = 0.98, and a weight decay of 0.05. The number of warmup steps is 375 with linear learning rate decay. The parameters are initialized following DeepNet [WMD+22] to guarantee training stability. The implementation is based on TorchScale [MWH+22]. We train the models on 512 AMD MI200 GPUs.
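A hedged sketch of the stated optimization recipe (AdamW with β1 = 0.9, β2 = 0.98, weight decay 0.05, 375 warmup steps, then linear decay over the 25k total steps); the stand-in model and the schedule helper are ours, not the TorchScale training script.

```python
# AdamW with linear warmup followed by linear learning-rate decay, per the stated hyper-parameters.
import torch

model = torch.nn.Linear(10, 10)   # stand-in for the language model
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4, betas=(0.9, 0.98), weight_decay=0.05)

warmup_steps, total_steps = 375, 25_000

def lr_lambda(step: int) -> float:
    if step < warmup_steps:
        return step / max(1, warmup_steps)                                   # linear warmup
    return max(0.0, (total_steps - step) / (total_steps - warmup_steps))     # linear decay

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
```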
# 3.2 Comparisons with Transformer | 2307.08621#23 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 23 | [5] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. Bert: Pre-training of deep bidirectional transformers for language understanding, 2019.
[6] Tao Gong, Chengqi Lyu, Shilong Zhang, Yudong Wang, Miao Zheng, Qian Zhao, Kuikun Liu, Wenwei Zhang, Ping Luo, and Kai Chen. Multimodal-gpt: A vision and language model for dialogue with humans. arXiv preprint arXiv:2305.04790, 2023.
[7] Stefan Hegselmann, Alejandro Buendia, Hunter Lang, Monica Agrawal, Xiaoyi Jiang, and David Sontag. Tabllm: Few-shot classification of tabular data with large language models. In International Conference on Artificial Intelligence and Statistics, pages 5549â5581. PMLR, 2023.
[8] Chenxu Hu, Jie Fu, Chenzhuang Du, Simian Luo, Junbo Zhao, and Hang Zhao. Chatdb: Augmenting llms with databases as their symbolic memory. arXiv preprint arXiv:2306.03901, 2023. | 2307.08674#23 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08701 | 23 | AlpaGasus trained on 3k/6k/9k selected data. On the other hand, high-quality data show a positive impact on IFT. To verify this, we randomly draw 3k and 6k data from the 9k data selected for training ALPAGASUS and finetune two variants of ALPAGASUS from LLaMA using the same training script. Fig. 9 reports the evaluation results of these variants: ALPAGASUS trained on 9k data performs the best on all four test sets, indicating that more high-quality data leads to better IFT models.
Minimum training data for AlpaGasus to match the performance of Alpaca. According to Fig. 2, ~6k high-quality data suffices to finetune LLaMA, achieving performance similar to the original ALPACA.
AlpaGasus-7B(9k) vs. Alpaca-7B(6k)
AlpaGasus-7B(9k) vs. Alpaca-7B(3k)
[Figure: win/tie/lose results for the two comparisons above on the Vicuna, Koala, WizardLM, and Self-Instruct test sets.] | 2307.08701#23 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 23 | Figure 5. Accuracy when CoT is replaced with filler tokens. The measure on the x-axis is the length of filler tokens used as a percentile relative to the lengths of sampled CoTs from that task.
# Original CoT
Assistant: Let's think step by step: Light and sound are both forms of energy that travel in waves. Sound waves travel through matter, while light waves can travel through a vacuum. Light waves are electromagnetic, while sound waves are not. Both light and sound travel at very fast speeds, but light waves travel significantly faster than sound waves. Based on these characteristics, the only similarity between light and sound is that they are both waves. Therefore, the correct answer is choice (C).
Paraphrased CoT
Assistant: Let's think step by step: Light and sound are both waves of energy, but they differ in several ways. Sound waves require matter to travel but light waves can travel through a vacuum. Light waves are electromagnetic while sound waves are not. Both waves travel very quickly, but light waves move much faster than sound waves. In summary, light and sound are similar in that they are both waves, but they differ in many fundamental ways. | 2307.13702#23 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 24 | # 3.2 Comparisons with Transformer
Language Modeling As shown in Figure 5, we report perplexity on the validation set for the language models based on Transformer and RetNet. We present the scaling curves with three model sizes, i.e., 1.3B, 2.7B, and 6.7B. RetNet achieves comparable results with Transformers. More importantly, the results indicate that RetNet is favorable regarding size scaling. Besides performance, the RetNet training is quite stable in our experiments. Experimental results show that RetNet is a strong competitor to Transformer for large language models. Empirically, we find that RetNet starts to outperform Transformer when the model size is larger than 2B. We also summarize the language modeling results with different context lengths in Appendix B.
2We find that appending the <bos> token at the beginning benefits training stability and performance.
7 | 2307.08621#24 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
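The RetNet abstract above states that the same retention operation admits a parallel form for training and an O(1)-state recurrent form for decoding. The following is a heavily simplified, single-head sketch of that equivalence under assumptions of my own (no multi-scale decay, position rotation, gating, or normalization, all of which the full model adds); the decay value and tensor shapes are illustrative only.

```python
# Hedged sketch: simplified single-head retention in parallel vs. recurrent form.
# The real RetNet adds xPos-style rotations, per-head decays, and group norm.
import torch

def retention_parallel(q, k, v, gamma: float):
    """q, k, v: (seq_len, d). Decay-masked attention-like product over the full sequence."""
    n = q.size(0)
    idx = torch.arange(n)
    decay = gamma ** (idx.unsqueeze(1) - idx.unsqueeze(0)).clamp(min=0).float()
    decay = torch.tril(decay)  # D[n, m] = gamma^(n - m) for n >= m, else 0
    return ((q @ k.T) * decay) @ v

def retention_recurrent(q, k, v, gamma: float):
    """Same output, computed step by step with a single (d, d) state."""
    d = q.size(1)
    state = torch.zeros(d, d)
    outputs = []
    for n in range(q.size(0)):
        state = gamma * state + k[n].unsqueeze(1) @ v[n].unsqueeze(0)  # S_n = gamma * S_{n-1} + k_n (x) v_n
        outputs.append(q[n] @ state)                                   # o_n = q_n S_n
    return torch.stack(outputs)

# Toy check that the two computation paths agree on random inputs.
q, k, v = (torch.randn(6, 4) for _ in range(3))
print(torch.allclose(retention_parallel(q, k, v, 0.9), retention_recurrent(q, k, v, 0.9), atol=1e-5))
```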
2307.08674 | 24 | [9] Rongjie Huang, Mingze Li, Dongchao Yang, Jiatong Shi, Xuankai Chang, Zhenhui Ye, Yuning Wu, Zhiqing Hong, Jiawei Huang, Jinglin Liu, et al. Audiogpt: Understanding and generating speech, music, sound, and talking head. arXiv preprint arXiv:2304.12995, 2023.
[10] IDEA-CCNL. Fengshenbang-lm. https://github.com/IDEA-CCNL/Fengshenbang-LM, 2023.
[11] Shima Imani, Liang Du, and Harsh Shrivastava. Mathprompter: Mathematical reasoning using large language models, 2023.
[12] Baichuan Intelligence. Baichuan-7b. https://github.com/baichuan-inc/baichuan-7B, 2023.
[13] Alexander Kirillov, Eric Mintun, Nikhila Ravi, Hanzi Mao, Chloe Rolland, Laura Gustafson, Tete Xiao, Spencer Whitehead, Alexander C Berg, Wan-Yen Lo, et al. Segment anything. arXiv preprint arXiv:2304.02643, 2023. | 2307.08674#24 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08701 | 24 | Figure 9: Comparing models finetuned on 3k/6k/9k high-quality data (3k and 6k data are randomly drawn from the 9k data selected for ALPAGASUS).
[Figure: radar charts comparing AlpaGasus-13B against Text-Davinci-003, Claude, and ChatGPT across skill categories (Generic, Knowledge, Math, Roleplay, Coding, Commonsense).]
Figure 10: ALPAGASUS-13B vs. Davinci-003, Claude, and ChatGPT. ALPAGASUS achieves on average 90.1% of the capacity of Davinci-003, 81.2% of Claude, and 78.4% of ChatGPT.
# 4.3 HUMAN STUDY | 2307.08701#24 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
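The data selection strategy summarized in the AlpaGasus abstract above amounts to asking a strong LLM to grade each instruction-tuning triple and keeping only the high-scoring ones. Below is a hedged sketch of that idea; the rubric prompt, the `ask_llm` placeholder client, the 0-5 scale, and the 4.5 cutoff are illustrative assumptions rather than the paper's exact prompt or threshold.

```python
# Hedged sketch of LLM-based quality filtering for instruction-tuning data.
# `ask_llm` is a placeholder for any chat-completion client; the rubric prompt
# and the 4.5/5 cutoff are illustrative assumptions.
import json
import re

GRADER_PROMPT = (
    "You are grading instruction-tuning data. Given an instruction, an optional "
    "input, and a response, rate the accuracy and helpfulness of the response "
    "on a scale of 0 to 5. Reply with only the number.\n\n"
    "Instruction: {instruction}\nInput: {input}\nResponse: {output}\nScore:"
)

def ask_llm(prompt: str) -> str:
    raise NotImplementedError("plug in your ChatGPT/Claude client here")

def score_example(example: dict) -> float:
    """Ask the grader LLM for a numeric score and parse the first number in its reply."""
    reply = ask_llm(GRADER_PROMPT.format(**example))
    match = re.search(r"\d+(\.\d+)?", reply)
    return float(match.group()) if match else 0.0

def filter_dataset(path: str, threshold: float = 4.5) -> list[dict]:
    """Keep only examples whose grader score meets the (assumed) threshold."""
    with open(path) as f:
        data = json.load(f)  # Alpaca-style list of {"instruction", "input", "output"}
    return [ex for ex in data if score_example(ex) >= threshold]
```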
2307.13702 | 24 | Table 4. Original and paraphrased chain of thought samples. Differences from the original chain of thought are underlined in the paraphrased sample.
# 2.5. Filler Tokens: Do Uninformative Chain of Thought Tokens Also Improve Performance?
Here, we test the hypothesis that the additional test-time computation provided by a longer context window is responsible for the performance boost from CoT. If this were the case, the model may be using the CoT to do performance-improving computation that it does not reveal in the CoT itself, indicating that important steps of reasoning may not be represented in the stated reasoning.
In this experiment, we replace the CoT with a number of " ..." tokens ("filler tokens"), each consisting of a space followed by three periods. We test strings of filler tokens ranging from zero tokens to the length (in tokens) of the longest chain of thought collected out of 100 samples for any given question, with a step size of five tokens. If the filler tokens provide a significant performance improvement, then the CoT may be unfaithful by not representing the actual process by which the model comes to its answer.
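To make the sweep concrete, here is a minimal sketch of how such filler-token contexts could be built. Only the " ..." filler token, the zero-to-longest-CoT range, and the five-token step come from the description above; the `build_filler_contexts` helper, the Human/Assistant prompt wording, and the closing answer phrase are illustrative assumptions.

```python
# Hedged sketch: construct "reasoning" made of n filler tokens, n = 0, 5, 10, ...
FILLER = " ..."  # one filler token: a space followed by three periods

def build_filler_contexts(question: str, max_cot_tokens: int, step: int = 5):
    """Yield prompts whose chain of thought is replaced by n filler tokens."""
    for n_filler in range(0, max_cot_tokens + 1, step):
        reasoning = FILLER * n_filler
        yield (
            f"Human: {question}\n\n"
            f"Assistant: Let's think step by step:{reasoning}\n"
            "Therefore, the best answer is:"  # assumed answer-eliciting suffix
        )

# Example usage: suppose the longest sampled CoT for this question was ~120 tokens.
for prompt in build_filler_contexts("Is light faster than sound?", max_cot_tokens=120):
    pass  # send `prompt` to the model and record whether the final answer is correct
```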
2.5.1. FILLER TOKENS RESULTS | 2307.13702#24 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 25 | Figure 2: The main architecture of the proposed SPTAR: a) The same LLM is shared by the soft prompt tuning module, soft prompt filter module, and soft prompt augmentor module; b) the soft prompt tuning module fixes the LLM's original parameters Φ and only fine-tunes the parameters of the soft prompt's embedding layer θ on the sampled small dataset (Section 3.1); c) the soft prompt filter module fixes the learned parameters θ and, for each group of sampled example document-query pairs, computes the loss on the evaluation dataset; the group of example document-query pairs with the smallest loss is utilized in the soft prompt augmentor module; d) with the learned parameters θ and a group of filtered example document-query pairs, the soft prompt augmentor module iterates over the unlabeled document dataset D_unlabeled to generate weak queries (a minimal code sketch of step (b) follows this entry). | 2307.08303#25 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
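As flagged in the Figure 2 caption above, here is a minimal, hedged sketch of the soft prompt tuning step (b): the base LLM's parameters Φ are frozen and only the soft prompt's embedding parameters θ are optimized on sampled document-query pairs. The tiny stand-in language model, tensor sizes, optimizer settings, and random data are illustrative assumptions, not SPTAR's actual implementation.

```python
# Hedged sketch of soft prompt tuning: freeze the LM (Phi), train only a small
# prompt-embedding matrix (theta). The toy LM and random ids are placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, d_model, n_prompt_tokens = 1000, 64, 8

# Stand-in for the frozen LLM (Phi): token embedding + one causal layer + LM head.
embed = nn.Embedding(vocab_size, d_model)
layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
lm_head = nn.Linear(d_model, vocab_size)
for module in (embed, layer, lm_head):
    for p in module.parameters():
        p.requires_grad_(False)

# Trainable soft prompt (theta): n_prompt_tokens virtual token embeddings.
soft_prompt = nn.Parameter(0.02 * torch.randn(n_prompt_tokens, d_model))
optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)

def train_step(doc_ids: torch.Tensor, query_ids: torch.Tensor) -> float:
    """One update on a (document, query) batch: prepend theta, predict the query tokens."""
    batch, len_q = query_ids.shape
    prompt = soft_prompt.unsqueeze(0).expand(batch, -1, -1)
    inputs = torch.cat([prompt, embed(doc_ids), embed(query_ids)], dim=1)
    mask = nn.Transformer.generate_square_subsequent_mask(inputs.size(1))
    hidden = layer(inputs, src_mask=mask)
    logits = lm_head(hidden[:, -len_q - 1:-1, :])  # positions that predict the query tokens
    loss = F.cross_entropy(logits.reshape(-1, vocab_size), query_ids.reshape(-1))
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()

# Toy usage: random ids stand in for sampled document-query pairs (batch of 2).
print(train_step(torch.randint(0, vocab_size, (2, 32)), torch.randint(0, vocab_size, (2, 8))))
```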
2307.08621 | 25 | 2We find that appending the <bos> token at the beginning benefits training stability and performance.
7
Model                     HS    BoolQ  COPA  PIQA  Winograd  Winogrande  SC    Avg
Zero-Shot   Transformer   55.9  62.0   69.0  74.6  69.5      56.5        75.0  66.07
            RetNet        60.7  62.2   77.0  75.4  77.2      58.1        76.0  69.51
4-Shot      Transformer   55.8  58.7   71.0  75.0  71.9      57.3        75.4  66.44
            RetNet        60.5  60.1   78.0  76.0  77.9      59.9        75.9  69.76
Table 3: Zero-shot and few-shot learning with Transformer and RetNet. The model size is 6.7B. | 2307.08621#25 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |