| doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2307.08674 | 25 | [14] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. Large language models are zero-shot reasoners. Advances in neural information processing systems, 35:22199–22213, 2022.
[15] LangChain. Langchain. https://blog.langchain.dev/, 2022.
[16] Juho Lee, Yoonho Lee, Jungtaek Kim, Adam Kosiorek, Seungjin Choi, and Yee Whye Teh. Set transformer: A framework for attention-based permutation-invariant neural networks. In International conference on machine learning, pages 3744–3753. PMLR, 2019.
[17] Hongxin Li, Jingran Su, Yuntao Chen, Qing Li, and Zhaoxiang Zhang. Sheetcopilot: Bringing software productivity to the next level through large language models. arXiv preprint arXiv:2305.19308, 2023.
| 2307.08674#25 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 25 | Algorithm 1 FLASHATTENTION-2 forward pass. Require: Matrices Q, K, V ∈ R^{N×d} in HBM, block sizes B_c, B_r. 1: Divide Q into T_r = ⌈N/B_r⌉ blocks Q_1, ..., Q_{T_r} of size B_r × d each, and divide K, V into T_c = ⌈N/B_c⌉ blocks K_1, ..., K_{T_c} and V_1, ..., V_{T_c}, of size B_c × d each. 2: Divide the output O ∈ R^{N×d} into T_r blocks O_1, ..., O_{T_r} of size B_r × d each, and divide the logsumexp L into T_r blocks L_1, ..., L_{T_r} of size B_r each. 3: for 1 ≤ i ≤ T_r do 4: Load Q_i from HBM to on-chip SRAM. 5: On chip, initialize O_i^{(0)} = (0)_{B_r×d} ∈ R^{B_r×d}, ℓ_i^{(0)} = (0)_{B_r} ∈ R^{B_r}, m_i^{(0)} = (−∞)_{B_r} ∈ R^{B_r}. 6: for 1 ≤ j ≤ T_c do 7: Load K_j, V_j from HBM to on-chip SRAM. 8: On chip, compute S_i^{(j)} = Q_i K_j^T ∈ R^{B_r×B_c}. 9: On chip, compute m_i^{(j)} = max(m_i^{(j-1)}, rowmax(S_i^{(j)})) ∈ R^{B_r}, P̃_i^{(j)} = | 2307.08691#25 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 25 | # 4.3 HUMAN STUDY
We further undertake human studies by enlisting three participants tasked with labeling the question/answer pairs. To be specific, we select 40 prompts from each test set, resulting in a total of 160 prompts. These are then presented to the participants alongside the corresponding responses generated by both ALPAGASUS-13B and Alpaca-13B. The final answers are determined by majority voting. There are 63/160 wins for ALPAGASUS-13B, 64/160 ties and 33/160 losses, which indicates the superiority of our ALPAGASUS. Comprehensive results on each test set and user guidelines can be found in Appendix J.
4.4 COMPARISON WITH CHATGPT/CLAUDE/DAVINCI003.
In Fig. 10, we compare ALPAGASUS with text-Davinci-003, ChatGPT, and Claude. The results show that ALPAGASUS-13B can achieve ≥ 90% capacity of its teacher model, text-Davinci-003, which is used to generate the ALPACA-52k instruction data.
4.5 BENCHMARK PERFORMANCE | 2307.08701#25 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 25 | 2.5.1. FILLER TOKENS RESULTS
# 2.6. Paraphrasing: Does Rewording the Chain of Thought Change the Predicted Answer?
One way that the model's stated reasoning could be unfaithful and also not post-hoc is if information that is encoded in the phrasing choices of the reasoning is responsible for the change in the answer, rather than the content of the CoT itself. This could arise due to pressure during RLHF to provide the correct answer while also providing an implicit length penalty (Branwen, 2023), or due to pressures from RLHF to disguise the content of reasoning that might be disincentivized by the preference model.
In this experiment, we have a language model paraphrase the CoT to ablate phrasing choices that could contain encoded information in the CoT. If phrasing-encoded information is a major component of the accuracy boost conveyed by CoT, we should expect that paraphrasing will reduce or eliminate it. We ensure that the model that paraphrases the reasoning does not have access to the original question, to remove any incentive to generate reasoning that produces a correct answer to that question. | 2307.13702#25 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 26 | prompt s to be optimized with size l_s, by repeating "please generate query for document" until the length of s equals l_s. Finally, we concatenate s with c_j as one training instance t_j = [s; c_j], and there are Num(X) − M instances in each epoch. Assume we have an autoregressive LLM denoted by p_φ(q_j | t_j). When t_j is inputted into the soft prompt tuning module, it is first tokenized into a list of ids z_j indexed by i; then the embeddings of the ids are extracted and fed into the following layers to compute the hidden vectors. For conciseness, we assume each token in t_j has one corresponding id in z_j. For training instance t_j, the hidden vector of the i-th time step is defined as h_{j,i} = [h_{j,i}^{(1)}, ..., h_{j,i}^{(l)}], where h_{j,i}^{(k)} ∈ R^d and l is the number of layers in the LLM. An autoregressive LLM takes the current z_{j,i} and its left context to compute h_{j,i} as follows:
h_{j,i} = LLM_{θ,φ}(z_{j,i}, h_{j,<i}) if i ≤ l_s, and h_{j,i} = LLM_φ(z_{j,i}, h_{j,<i}) if i > l_s. (1) | 2307.08303#26 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
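The prompt-prepending scheme in the 2307.08303 chunk above (a trainable soft prompt of length l_s placed in front of the tokenized instance t_j, with the LLM parameters φ kept frozen) can be illustrated with a minimal PyTorch sketch. This is not the SPTAR authors' code: the class name, the stand-in LM, and the initialization are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SoftPromptWrapper(nn.Module):
    """Prepend l_s trainable prompt vectors to a frozen LM's token embeddings."""

    def __init__(self, lm: nn.Module, embed: nn.Embedding, prompt_len: int):
        super().__init__()
        self.lm = lm          # frozen autoregressive LM body (parameters phi)
        self.embed = embed    # frozen token-embedding table
        d = embed.embedding_dim
        # Trainable soft prompt theta with shape (l_s, d).
        self.soft_prompt = nn.Parameter(torch.randn(prompt_len, d) * 0.02)
        for p in list(lm.parameters()) + list(embed.parameters()):
            p.requires_grad_(False)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # input_ids: (B, T) token ids of c_j = [example document-query pairs; document; query]
        tok = self.embed(input_ids)                                  # (B, T, d)
        prompt = self.soft_prompt.unsqueeze(0).expand(tok.size(0), -1, -1)
        return self.lm(torch.cat([prompt, tok], dim=1))              # (B, l_s + T, d) -> logits

if __name__ == "__main__":
    vocab, d = 100, 32
    embed = nn.Embedding(vocab, d)
    # Stand-in for a real causal LM: any module mapping (B, T, d) -> (B, T, vocab).
    dummy_lm = nn.Sequential(nn.Linear(d, d), nn.GELU(), nn.Linear(d, vocab))
    model = SoftPromptWrapper(dummy_lm, embed, prompt_len=8)
    logits = model(torch.randint(0, vocab, (2, 16)))
    print(logits.shape)  # torch.Size([2, 24, 100]): 8 prompt positions + 16 token positions
```

During training only `soft_prompt` would receive gradients, with the language-modeling loss computed on the query tokens q_j that follow the document in t_j.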
2307.08621 | 26 | Table 3: Zero-shot and few-shot learning with Transformer and RetNet. The model size is 6.7B.
| Model Size | Trm Memory (GB) ↓ | Trm+FlashAttn Memory (GB) ↓ | RetNet Memory (GB) ↓ | Trm Throughput (wps) ↑ | Trm+FlashAttn Throughput (wps) ↑ | RetNet Throughput (wps) ↑ |
|---|---|---|---|---|---|---|
| 1.3B | 74.8 | 38.8 | 34.5 | 10832.4 | 63965.2 | 73344.8 |
| 2.7B | 69.6 | 42.1 | 42.0 | 5186.0 | 34990.2 | 38921.2 |
| 6.7B | 69.0 | 51.4 | 48.0 | 2754.4 | 16230.1 | 17458.6 |
| 13B | 61.4 | 46.3 | 45.9 | 1208.9 | 7945.1 | 8642.2 |
Table 4: Training cost of Transformer (Trm), Transformer with FlashAttention (Trm+FlashAttn), and RetNet. We report memory consumption and training throughput (word per second; wps). | 2307.08621#26 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 26 |
[18] Jinyang Li, Binyuan Hui, Reynold Cheng, Bowen Qin, Chenhao Ma, Nan Huo, Fei Huang, Wenyu Du, Luo Si, and Yongbin Li. Graphix-t5: Mixing pre-trained transformers with graph-aware layers for text-to-sql parsing. arXiv preprint arXiv:2301.07507, 2023.
[19] Liyao Li, Haobo Wang, Liangyu Zha, Qingyi Huang, Sai Wu, Gang Chen, and Junbo Zhao. Learning a data-driven policy network for pre-training automated feature engineering. In The Eleventh International Conference on Learning Representations, 2022.
[20] Guoshan Lu, Haobo Wang, Saisai Yang, Jing Yuan, Guozheng Yang, Cheng Zang, Gang Chen, and Junbo Zhao. Catch: Collaborative feature set search for automated feature engineering. In Proceedings of the ACM Web Conference 2023, pages 1886–1896, 2023.
[21] OpenAI. Chatgpt. https://openai.com/blog/chatgpt, 2022.
[22] OpenAI. Gpt-4 technical report, 2023. | 2307.08674#26 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 26 | ∈ R^{B_r×B_c}. 9: On chip, compute m_i^{(j)} = max(m_i^{(j-1)}, rowmax(S_i^{(j)})) ∈ R^{B_r}, P̃_i^{(j)} = exp(S_i^{(j)} − m_i^{(j)}) ∈ R^{B_r×B_c} (pointwise), ℓ_i^{(j)} = e^{m_i^{(j-1)} − m_i^{(j)}} ℓ_i^{(j-1)} + rowsum(P̃_i^{(j)}) ∈ R^{B_r}. 10: On chip, compute O_i^{(j)} = diag(e^{m_i^{(j-1)} − m_i^{(j)}})^{-1} O_i^{(j-1)} + P̃_i^{(j)} V_j. 11: end for 12: On chip, compute O_i = diag(ℓ_i^{(T_c)})^{-1} O_i^{(T_c)}. 13: On chip, compute L_i = m_i^{(T_c)} + log(ℓ_i^{(T_c)}). 14: Write O_i to HBM as the i-th block of O. 15: Write L_i to HBM as the i-th block of L. 16: end for 17: Return the output O and the logsumexp L. | 2307.08691#26 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
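The two 2307.08691 chunks above spell out the tiled forward pass; the numerics of that online-softmax recurrence can be checked with a short NumPy sketch. This is only the math of the recurrence, not the CUDA kernel: the rescaling is applied here as a multiplication by exp(m_prev − m_new) on an un-normalized accumulator, the 1/√d score scaling and the causal mask are omitted, and the function name is ours.

```python
import numpy as np

def blocked_attention_forward(Q, K, V, Br=16, Bc=16):
    """Block-by-block attention with an online softmax (numerics of the tiled forward pass)."""
    N, d = Q.shape
    O = np.zeros((N, d))
    L = np.zeros(N)                              # per-row logsumexp, as in lines 12-13
    for i0 in range(0, N, Br):                   # outer loop over query blocks Q_i
        Qi = Q[i0:i0 + Br]
        acc = np.zeros((Qi.shape[0], d))         # un-normalized output accumulator
        denom = np.zeros(Qi.shape[0])            # running softmax denominator (ell)
        m = np.full(Qi.shape[0], -np.inf)        # running row maximum
        for j0 in range(0, N, Bc):               # inner loop over key/value blocks K_j, V_j
            Kj, Vj = K[j0:j0 + Bc], V[j0:j0 + Bc]
            S = Qi @ Kj.T                        # scores for this tile
            m_new = np.maximum(m, S.max(axis=1))
            P = np.exp(S - m_new[:, None])       # tile probabilities, shifted by the new max
            scale = np.exp(m - m_new)            # rescales the previously accumulated sums
            denom = scale * denom + P.sum(axis=1)
            acc = scale[:, None] * acc + P @ Vj
            m = m_new
        O[i0:i0 + Br] = acc / denom[:, None]     # single final normalization
        L[i0:i0 + Br] = m + np.log(denom)        # logsumexp, stored for the backward pass
    return O, L

# Sanity check against naive softmax(Q K^T) V.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((64, 8)) for _ in range(3))
S = Q @ K.T
P = np.exp(S - S.max(axis=1, keepdims=True))
reference = (P / P.sum(axis=1, keepdims=True)) @ V
O, L = blocked_attention_forward(Q, K, V)
assert np.allclose(O, reference)
```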
2307.08701 | 26 | 4.5 BENCHMARK PERFORMANCE
Following InstructEval (Chia et al., 2023), we also evaluate our models on benchmark datasets, i.e., MMLU (Hendrycks et al., 2020), DROP (Dua et al., 2019), Humaneval (Chen et al., 2021), and BBH (Suzgun et al., 2022), to evaluate the models' performance. The details of the benchmark setting can be found in Appendix B. Benchmark results of our ALPAGASUS are shown in Table 2, where higher values indicate better performance. ALPAGASUS-7B, 13B show superiority on 3/4 of the datasets, which demonstrates the effectiveness of our filtering algorithm. Another interesting finding is that the models trained with our filtered data can be better on all the benchmarks than training with randomly selected data.7
7We observe similar performance gains of the 7B model on Dolly, and our 13B (3k) model consistently outperforms baselines, i.e., 13B(random-3k) and 13B(15k), on all four benchmark datasets, which are deferred to the Appendix B.
| 2307.08701#26 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
From Fig. 5 we can see that there is no increase in accuracy observed from adding "..." tokens to the context. In fact, for some tasks, such as TruthfulQA and OpenBookQA, the performance actually drops slightly in the longer-context setting, which may be due to this kind of sequence being out of the model's training distribution. These results suggest that extra test-time compute alone is not used by models to perform helpful but unstated reasoning.
We have the model paraphrase the same subsequences of the reasoning sample as were used in early answering ([x1] alone, [x1, x2], [x1, x2, x3], etc.) without access to the original question. The prompt we use can be found in Appendix A. We then insert the paraphrased chain of thought back into the original prompt and continue to sample until the end of the reasoning sample, then prompting for a final answer
| 2307.13702#26 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
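The paraphrasing intervention described in the two 2307.13702 chunks above (reword the first k reasoning steps with a model that never sees the question, splice the rewording back in, then let the target model finish the CoT and answer) can be outlined as follows. This is a hypothetical sketch: the callables, function names, and prompt format are ours, not the paper's code.

```python
from typing import Callable, List

def paraphrase_intervention(
    question: str,
    cot_steps: List[str],
    k: int,
    paraphrase: Callable[[str], str],    # paraphraser LM; deliberately never sees `question`
    continue_cot: Callable[[str], str],  # target LM: samples the rest of the reasoning
    final_answer: Callable[[str], str],  # target LM: produces the answer choice
) -> str:
    """Reword the first k CoT steps, splice them back, finish the CoT, and answer."""
    reworded_prefix = paraphrase(" ".join(cot_steps[:k]))  # ablate phrasing-encoded information
    prompt = f"{question}\nLet's think step by step.\n{reworded_prefix}"
    prompt += continue_cot(prompt)                         # continue sampling to the end of the CoT
    return final_answer(prompt)                            # e.g. "(A)", "(B)", ...
```

Comparing accuracy under this intervention with accuracy under the original chain of thought, at each truncation point k, gives the paraphrased vs. non-paraphrased curves the chunk refers to.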
2307.08303 | 27 | completing the soft prompt training, with the learned parameters θ, we try to select the best group of document-query pairs from X^train as example document-query pairs in the soft prompt augmentor. For k = 2, there are 1475 (50 · 59/2) groups of example pairs, which are too many to evaluate all of them efficiently. To reduce the computation complexity, we randomly sample N groups of example pairs from X^train to evaluate them on the evaluation dataset sampled from X^eval, and the group of example pairs with the best evaluation metric will be chosen as the example pairs in the soft prompt augmentor. As shown in Figure 2 (c), the only difference between soft prompt tuning and soft prompt filter is the dataset where the c_j comes from. Suppose we sampled N groups of document-query pairs, each of which has k document-query pairs (d_i, q_i) | 2307.08303#27 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 27 | Zero-Shot and Few-Shot Evaluation on Downstream Tasks We also compare the language models on a wide range of downstream tasks. We evaluate zero-shot and 4-shot learning with the 6.7B models. As shown in Table 3, the datasets include HellaSwag (HS) [ZHB+19], BoolQ [CLC+19], COPA [WPN+19], PIQA [BZB+20], Winograd, Winogrande [LDM12], and StoryCloze (SC) [MRL+17]. The accuracy numbers are consistent with language modeling perplexity presented in Figure 5. RetNet achieves comparable performance with Transformer on zero-shot and in-context learning settings.
# 3.3 Training Cost | 2307.08621#27 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 27 | [22] OpenAI. Gpt-4 technical report, 2023.
[23] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International conference on machine learning, pages 8748–8763. PMLR, 2021.
[24] Alec Radford, Karthik Narasimhan, Tim Salimans, Ilya Sutskever, et al. Improving language understanding by generative pre-training. 2018.
[25] Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, et al. Language models are unsupervised multitask learners. OpenAI blog, 1(8):9, 2019.
[26] Pengzhen Ren, Yun Xiao, Xiaojun Chang, Po-Yao Huang, Zhihui Li, Brij B. Gupta, Xiaojiang Chen, and Xin Wang. A survey of deep active learning, 2021. | 2307.08674#27 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 27 | Causal masking. One common use case of attention is in auto-regressive language modeling, where we need to apply a causal mask to the attention matrix S (i.e., any entry S_ij with j > i is set to −∞).
1. As FlashAttention and FlashAttention-2 already operate by blocks, for any blocks where all the column indices are more than the row indices (approximately half of the blocks for large sequence length), we can skip the computation of that block. This leads to around 1.7-1.8× speedup compared to attention without the causal mask.
2. We do not need to apply the causal mask for blocks whose row indices are guaranteed to be strictly more than the column indices. This means that for each row, we only need to apply the causal mask to 1 block (assuming square blocks).
Correctness, runtime, and memory requirement. As with FlashAttention, Algorithm 1 returns the correct output O = softmax(QK^T)V (with no approximation), using O(N²d) FLOPs and requires O(N) additional memory beyond inputs and output (to store the logsumexp L). The proof is almost the same as the proof of Dao et al. [5, Theorem 1], so we omit it here.
# 3.1.2 Backward pass | 2307.08691#27 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
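The two block-level observations in the 2307.08691 chunk above amount to classifying each (B_r × B_c) tile of the causal attention matrix. The sketch below is illustrative only; the function name and return values are ours, not the library's API.

```python
def block_mask_kind(i0: int, j0: int, Br: int, Bc: int) -> str:
    """Classify a (Br x Bc) tile of the causal attention matrix.

    Rows i0..i0+Br-1 are queries, columns j0..j0+Bc-1 are keys; entry (i, j)
    is masked when j > i.
    """
    i_min, i_max = i0, i0 + Br - 1
    j_min, j_max = j0, j0 + Bc - 1
    if j_min > i_max:
        return "skip"     # every entry has j > i: fully masked, never computed
    if j_max <= i_min:
        return "no-mask"  # every entry has j <= i: compute without applying the mask
    return "mask"         # tile straddles the diagonal: apply the causal mask element-wise

if __name__ == "__main__":
    N, Br, Bc = 8, 4, 4
    for i0 in range(0, N, Br):
        for j0 in range(0, N, Bc):
            print((i0, j0), block_mask_kind(i0, j0, Br, Bc))
    # (0, 0) mask, (0, 4) skip, (4, 0) no-mask, (4, 4) mask:
    # roughly half the off-diagonal tiles are skipped, and each block row needs
    # the element-wise mask for only one tile.
```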
2307.08701 | 27 |
| Datasets | 7B(9k-random) | 7B(9k) | 7B(52k) | 13B(9k-random) | 13B(9k) | 13B(52k) |
|---|---|---|---|---|---|---|
| BBH | 31.89 | 33.76 | 33.01 | 38.60 | 38.92 | 38.67 |
| Drop | 25.88 | 26.03 | 25.87 | 33.40 | 34.4 | 33.84 |
| Humaneval | 11.59 | 12.20 | 11.69 | 15.24 | 15.86 | 15.74 |
| MMLU | 36.93 | 38.78 | 40.86 | 44.98 | 46.12 | 47.89 |
Table 2: The benchmark results of filtering the Alpaca dataset.
# 5 HUMAN-WRITTEN INSTRUCTION SET FILTERING
In addition to filtering machine-generated datasets, our approach is capable of filtering human-written datasets. Specifically, we investigate the Databricks-dolly-15k dataset (Dolly, 2023), a seminal collection of 15,000 high-quality human-generated prompt/response pairs. Notably, this unparalleled dataset is a product of the collective efforts of more than 5,000 Databricks contributors, and the included prompts and responses are more than just simple text; they embody a comprehensive spectrum of human cognition, covering activities from inventive brainstorming to succinct summarization.
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 27 |
[Figure 6: panels "3-Step CoTs" and "4-Step CoTs"; x-axis: % of Reasoning Sample Paraphrased; y-axis: Accuracy; curves for AQuA, ARC (Challenge), TruthfulQA, LogiQA, MMLU, ARC (Easy), OpenBookQA, and HellaSwag, paraphrased vs. non-paraphrased, with a dashed line marking CoT accuracy on each task.]
[Figure 7: x-axis: Model Size (Parameters); y-axis: % Same Answer With and Without CoT; curves for the same eight tasks.]
Figure 7. Chance of giving the same answer with and without CoT reasoning, at a variety of model sizes.
Figure 6. Accuracy with and without paraphrasing.
as before.3 An example can be found in Table 4. We measure the model's accuracy after standard and paraphrased CoT. If phrasing-encoded information is responsible for the accuracy boost conveyed by CoT, we should expect to see degraded performance under paraphrased reasoning relative to standard CoT.
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 28 | N groups of document-query pairs, each of which has k document-query pairs (d_i, q_i)_{i=1}^k. X^eval has Num(X^eval) document-query pairs, and for each pair (d_j, q_j), similar to Table 1, we concatenate d_j with the (d_i, q_i)_{i=1}^k as c_j. Then, c_j is concatenated with the initialized prompt s as t_j = [s, c_j]. The evaluation metric is the same as the loss function L (Equation 3). We study the effectiveness of the soft prompt filter in Section 5.2.3. | 2307.08303#28 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
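The selection procedure in the two 2307.08303 chunks above (sample N candidate groups of k example document-query pairs, score each group with the same loss L used for prompt tuning but computed on evaluation pairs, keep the best group) reduces to a few lines. The names and the `loss_fn` callable below are illustrative stand-ins for running the frozen, prompt-tuned LLM, not the SPTAR implementation.

```python
import random
from typing import Callable, List, Sequence, Tuple

Pair = Tuple[str, str]  # (document, query)

def select_example_pairs(
    train_pairs: Sequence[Pair],
    eval_pairs: Sequence[Pair],
    loss_fn: Callable[[List[Pair], Sequence[Pair]], float],  # loss L of the prompt-tuned LLM
    k: int = 2,
    n_groups: int = 100,
    seed: int = 0,
) -> List[Pair]:
    """Sample N candidate groups of k example pairs and keep the lowest-loss group."""
    rng = random.Random(seed)
    candidates = [rng.sample(list(train_pairs), k) for _ in range(n_groups)]
    return min(candidates, key=lambda group: loss_fn(group, eval_pairs))
```

The chosen group is then reused as the in-context example pairs when prompting the LLM to tag unlabeled documents with weak queries.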
2307.08621 | 28 | # 3.3 Training Cost
As shown in Table 4, we compare the training speed and memory consumption of Transformer and RetNet, where the training sequence length is 8192. We also compare with FlashAttention [DFE+22], which improves speed and reduces GPU memory IO by recomputation and kernel fusion. In comparison, we implement RetNet using vanilla PyTorch code, and leave kernel fusion or FlashAttention-like acceleration for future work. We use the chunkwise recurrent representation of retention as described in Equation (7). The chunk size is set to 512. We evaluate the results with eight Nvidia A100-80GB GPUs, because FlashAttention is highly optimized for A100. Tensor parallelism is enabled for 6.7B and 13B models.
Experimental results show that RetNet is more memory-efficient and has higher throughput than Transformers during training. Even compared with FlashAttention, RetNet is still competitive in terms of speed and memory cost. Moreover, without relying on specific kernels, it is easy to train RetNet on other platforms efficiently. For example, we train the RetNet models on an AMD MI200 cluster with decent throughput. It is notable that RetNet has the potential to further reduce cost via advanced implementation, such as kernel fusion.
# 3.4 Inference Cost | 2307.08621#28 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
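The chunkwise recurrent representation mentioned in the 2307.08621 chunk above builds on the per-token recurrent form of retention, which keeps a fixed-size state decayed by γ at every step; the chunkwise variant processes 512-token chunks in parallel and carries this state across chunk boundaries. The sketch below shows only the single-head recurrent recurrence and checks it against the parallel form ((Q K^T) masked by the decay matrix D); the xPos-style rotation and normalization from the paper are omitted, and the code is ours, not the released implementation.

```python
import numpy as np

def recurrent_retention(Q, K, V, gamma: float):
    """Single-head recurrent retention: S_n = gamma * S_{n-1} + K_n^T V_n, o_n = Q_n S_n."""
    n, d_k = Q.shape
    S = np.zeros((d_k, V.shape[1]))           # recurrent state, size independent of sequence length
    out = np.zeros((n, V.shape[1]))
    for t in range(n):
        S = gamma * S + np.outer(K[t], V[t])  # decay old state, add current key-value outer product
        out[t] = Q[t] @ S
    return out

# Check against the parallel form ((Q K^T) * D) V with D[n, m] = gamma**(n - m) for n >= m, else 0.
rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 10, 4))
idx = np.arange(10)
D = np.tril(0.9 ** (idx[:, None] - idx[None, :]))
assert np.allclose(recurrent_retention(Q, K, V, 0.9), ((Q @ K.T) * D) @ V)
```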
2307.08674 | 28 | [27] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971, 2023.
[28] Peking University. Chatexcel. https://chatexcel.com/, 2023.
[29] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. Attention is all you need. Advances in neural information processing systems, 30, 2017.
[30] Jianguo Wang, Xiaomeng Yi, Rentong Guo, Hai Jin, Peng Xu, Shengjun Li, Xiangyu Wang, Xiangzhou Guo, Chengming Li, Xiaohai Xu, et al. Milvus: A purpose-built vector data management system. In Proceedings of the 2021 International Conference on Management of Data, pages 2614–2627, 2021.
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08701 | 28 | [Figure 11: win/tie/lose breakdown of Dolly-7B(3k) versus Dolly-7B(15k) and Dolly-7B(3k-random) on the Vicuna, WizardLM, and Self-Instruct test sets.]
Figure 11: Comparing models finetuned on filtered 3k data and original Dolly 15k data.
We also applied a threshold of 4.5 for data filtration, resulting in a filtered dataset of 2,996 samples. (Score distribution can be found in Appendix B) A comparison between the 7B/13B LLaMA trained on our filtered 3k dataset and the one trained on the entire Dolly 15k dataset is illustrated in Fig. 11 and Fig. 22. Our evaluation suggests that the model trained on our filtered data exhibits superior performance, thus underscoring the efficacy of our filtering method on human-composed datasets. Comprehensive details regarding training hyperparameters are provided in the Appendix D.8
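To make the selection step concrete, a minimal sketch of this kind of score-threshold filtering is shown below, assuming each record already carries an LLM-assigned quality score; the field names and helper are illustrative, not the authors' code:

```python
def filter_by_score(records, threshold=4.5):
    """Keep only instruction/response records whose LLM-assigned score clears the threshold.

    `records` is assumed to be a list of dicts like
    {"instruction": ..., "response": ..., "score": ...}.
    """
    return [r for r in records if r["score"] >= threshold]

# e.g., filtering Dolly-style data: ~15k records in, ~3k high-quality records out.
# filtered = filter_by_score(dolly_records, threshold=4.5)
```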
# 6 CASE STUDY & ANALYSIS | 2307.08701#28 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 28 | 2.6.1. PARAPHRASING RESULTS
Overall, the paraphrased accuracy closely matches the accuracy of the original chain of thought for almost all tasks at almost all numbers of filler tokens, as shown in Fig. 6. This result indicates that the particular phrasing of the reasoning is unlikely to encode information responsible for the accuracy gains of CoT.
able to confidently predict the answer without relying on CoT. In this case, CoT may not have much influence on the model's final answer. As a result, we hypothesize that, for reasoning faithfulness, larger models may behave worse than smaller ones ("inverse scaling"; McKenzie et al., 2023); smaller models may, for some tasks, benefit more from CoT, potentially leading them to rely more on CoT. | 2307.13702#28 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 29 | The training objective function is:
$$\max_{\theta,\phi} \log p_{\theta,\phi}(q_j \mid t_j) = \max_{\theta,\phi} \sum_{i \in \mathrm{idx}_{q_j}} \log p_{\theta,\phi}(z_{j,i} \mid h_{j,<i}) \tag{2}$$
where $\mathrm{idx}_{q_j}$ represents the indexes of $q_j$'s ids and $p_{\theta,\phi}(z_{j,i} \mid h_{j,<i})$ represents the probability of the next token whose id is $z_{j,i}$. We utilize the negative log-likelihood and perplexity as loss $L$, which is defined as:
$$L = -\log p_{\theta,\phi}(q_j \mid t_j) + \exp\big(-\log p_{\theta,\phi}(q_j \mid t_j)\big) \tag{3}$$
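As a concrete illustration of Eqs. (2)-(3), here is a minimal PyTorch-style sketch (not the authors' implementation) of computing the loss over the query-token positions of a tokenized prompt; the per-token average is used so that the exponential term matches the usual definition of perplexity:

```python
import torch
import torch.nn.functional as F

def soft_prompt_loss(logits, input_ids, query_positions):
    """NLL plus a perplexity term over the target-query tokens only (a sketch of Eq. 3).

    logits: [seq_len, vocab] causal-LM outputs for the filled prompt t_j
    input_ids: [seq_len] token ids of t_j
    query_positions: LongTensor of positions of q_j's tokens inside t_j (idx_{q_j})
    """
    log_probs = F.log_softmax(logits[:-1], dim=-1)   # logits at position i-1 predict token i
    targets = input_ids[1:]
    token_log_probs = log_probs.gather(-1, targets.unsqueeze(-1)).squeeze(-1)

    shifted = query_positions - 1                     # account for the shift above
    nll = -token_log_probs[shifted].mean()            # negative log-likelihood over q_j
    return nll + torch.exp(nll)                       # add the perplexity term
```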
3.3 Soft Prompt Filter During the training process of the soft prompt module, we observe that the choice of example document-query pairs $(d_i, q_i)$ significantly impacts the quality of text generation. Therefore, upon | 2307.08303#29 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 29 | # 3.4 Inference Cost
As shown in Figure 6, we compare memory cost, throughput, and latency of Transformer and RetNet during inference. Transformers reuse KV caches of previously decoded tokens. RetNet uses the recurrent representation as described in Equation (6). We evaluate the 6.7B model on the A100-80GB GPU in our experiments. Figure 6 shows that RetNet outperforms Transformer in terms of inference cost.
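For intuition, here is a minimal sketch of one recurrent decoding step (single head; shapes and decay value are illustrative, not the released code), based on the recurrent retention update S_n = gamma * S_{n-1} + k_n^T v_n and o_n = q_n S_n referenced above:

```python
import torch

def retention_decode_step(state, q_n, k_n, v_n, gamma):
    """One O(1) retention step: a fixed-size state summarizes the whole past."""
    state = gamma * state + torch.outer(k_n, v_n)   # S_n = gamma * S_{n-1} + k_n^T v_n
    o_n = q_n @ state                               # o_n = q_n S_n
    return o_n, state

d_k, d_v, gamma = 64, 64, 0.97                      # illustrative sizes and decay
state = torch.zeros(d_k, d_v)                       # memory does not grow with sequence length
for _ in range(4):                                  # decode a few tokens
    q_n, k_n, v_n = torch.randn(d_k), torch.randn(d_k), torch.randn(d_v)
    o_n, state = retention_decode_step(state, q_n, k_n, v_n, gamma)
```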
Memory As shown in Figure 6a, the memory cost of Transformer increases linearly due to KV caches. In contrast, the memory consumption of RetNet remains consistent even for long sequences,
(a) GPU memory cost of Transformer and RetNet. (b) Throughput of Transformer and RetNet.
(c) Inference latency with different batch sizes.
Figure 6: Inference cost of Transformer and RetNet with a model size of 6.7B. RetNet outperforms Transformers in terms of memory consumption, throughput, and latency.
requiring much less GPU memory to host RetNet. The additional memory consumption of RetNet is almost negligible (i.e., about 3%) while the model weights occupy 97%.
Throughput As presented in Figure 6b, the throughput of Transformer drops as the decoding length increases. In comparison, RetNet has higher and length-invariant throughput during decoding, by utilizing the recurrent representation of retention. | 2307.08621#29 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 29 | [31] Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A Smith, Daniel Khashabi, and Hannaneh Hajishirzi. Self-instruct: Aligning language model with self generated instructions. arXiv preprint arXiv:2212.10560, 2022.
[32] Yuxuan Wang, RJ Skerry-Ryan, Daisy Stanton, Yonghui Wu, Ron J. Weiss, Navdeep Jaitly, Zongheng Yang, Ying Xiao, Zhifeng Chen, Samy Bengio, Quoc Le, Yannis Agiomyrgiannakis, Rob Clark, and Rif A. Saurous. Tacotron: Towards end-to-end speech synthesis, 2017.
[33] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems, 35:24824–24837, 2022.
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 29 | exponentials in the softmax. We include the backward pass description in Algorithm 2 for completeness.
Algorithm 2 FlashAttention-2 Backward Pass
Require: Matrices Q, K, V, O, dO ∈ R^(N×d) in HBM, vector L ∈ R^N in HBM, block sizes B_c, B_r.
1: Divide Q into T_r = ⌈N/B_r⌉ blocks Q_1, ..., Q_{T_r} of size B_r × d each, and divide K, V into T_c = ⌈N/B_c⌉ blocks K_1, ..., K_{T_c} and V_1, ..., V_{T_c}, of size B_c × d each.
2: Divide O into T_r blocks O_1, ..., O_{T_r} of size B_r × d each, divide dO into T_r blocks dO_1, ..., dO_{T_r} of size B_r × d each, and divide L into T_r blocks L_1, ..., L_{T_r} of size B_r each.
3: Initialize dQ = (0)_{N×d} in HBM and divide it into T_r blocks dQ_1, ..., dQ_{T_r} of size B_r × d each. Divide dK, dV ∈ R^(N×d) into T_c blocks dK_1, ..., dK_{T_c} and dV_1, ..., dV_{T_c}, of size B_c × d each.
4: Compute D = rowsum(dO ∘ O) ∈ R^N (pointwise multiply), write D to HBM and divide it into T_r blocks D_1, ..., D_{T_r} of size B_r each.
5: for 1 ≤ j ≤ T_c do
6: Load K_j, V_j from HBM to on-chip SRAM.
7: Initialize dK_j = (0)_{B_c×d}, dV_j = (0)_{B_c×d} on SRAM.
8: for 1 ≤ i ≤ T_r do
9: Load Q_i, O_i, dO_i, dQ_i, L_i, D_i from HBM to on-chip SRAM.
10: On chip, compute S_i^(j) = Q_i K_j^T ∈ R^(B_r×B_c).
11: On chip, compute P_i^(j) = exp(S_i^(j) − L_i) ∈ R^(B_r×B_c).
12: On chip, compute dV_j ← dV_j + (P_i^(j))^T dO_i ∈ R^(B_c×d).
13: On chip, compute dP_i^(j) = dO_i V_j^T ∈ R^(B_r×B_c).
14: On chip, compute dS_i^(j) = P_i^(j) ∘ (dP_i^(j) − D_i) ∈ R^(B_r×B_c) | 2307.08691#29 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
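For readers who want to check the block recurrences in Algorithm 2 of the preceding chunk, here is a simplified NumPy sketch (single head, no masking, dropout, or 1/sqrt(d) scaling; it also includes the dQ/dK accumulation that the chunk truncates). It illustrates the math, not the CUDA kernel:

```python
import numpy as np

def flash_attention_backward_sketch(Q, K, V, O, dO, L, Br=64, Bc=64):
    """Blocked attention backward pass, mirroring Algorithm 2's per-block updates.

    L holds the row-wise logsumexp saved by the forward pass, so
    P = exp(S - L) recovers the softmax probabilities block by block.
    """
    N, d = Q.shape
    dQ, dK, dV = np.zeros_like(Q), np.zeros_like(K), np.zeros_like(V)
    D = np.sum(dO * O, axis=1)                      # D = rowsum(dO o O), shape [N]

    for j in range(0, N, Bc):                       # outer loop over K/V blocks
        Kj, Vj = K[j:j + Bc], V[j:j + Bc]
        for i in range(0, N, Br):                   # inner loop over Q blocks
            Qi, dOi = Q[i:i + Br], dO[i:i + Br]
            Li, Di = L[i:i + Br], D[i:i + Br]
            S = Qi @ Kj.T                           # S_i^(j) = Q_i K_j^T
            P = np.exp(S - Li[:, None])             # P_i^(j) = exp(S - L_i)
            dV[j:j + Bc] += P.T @ dOi               # dV_j += P^T dO_i
            dP = dOi @ Vj.T                         # dP_i^(j) = dO_i V_j^T
            dS = P * (dP - Di[:, None])             # dS_i^(j) = P o (dP - D_i)
            dQ[i:i + Br] += dS @ Kj                 # dQ_i += dS K_j
            dK[j:j + Bc] += dS.T @ Qi               # dK_j += dS^T Q_i
    return dQ, dK, dV
```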
2307.08701 | 29 | # 6 CASE STUDY & ANALYSIS
[Figure 12 content] Left panel (Alpaca-9k-random vs. AlpaGasus). Instruction: "Find the area of a circle with radius 5 cm." Alpaca-9k-random: "The area of a circle with radius 5 cm is 50.26548245743669." AlpaGasus: "The area of a circle with radius 5 cm is 78.54 square centimeters."
Right panel (AlpaGasus vs. Alpaca-52k). Instruction: "Implement a regular expression in Python to validate an email address." Alpaca-52k: "\s(https?://[^\s]+)\s" AlpaGasus: "^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,3}$"
Figure 12: Case study on 13B models of ALPAGASUS and ALPACA. Left: Math capability comparison based on WizardLM test set. Right: Coding skill comparison based on Vicuna test set. | 2307.08701#29 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 29 | To explore this hypothesis, we show what percentage of the time the answer changes with vs. without CoT, a metric that intuitively captures how much the model relies on the CoT to predict answers. This metric is equivalent to our early answering experiment when using 0% of the CoT specifically. This metric is highly predictive of overall early answering and adding mistakes results, e.g., the ordering of tasks by AOC in Table 2. We thus use this metric in lieu of running the full set of early answering and adding mistakes experiments for computational reasons.
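A minimal sketch of this metric (an illustrative helper, not the authors' evaluation code):

```python
def answer_change_rate(answers_with_cot, answers_without_cot):
    """Fraction of questions whose final answer differs with vs. without CoT.

    Equivalent to the early-answering measurement with 0% of the CoT shown;
    inputs are parallel lists of final answers for the same questions.
    """
    assert len(answers_with_cot) == len(answers_without_cot)
    changed = sum(a != b for a, b in zip(answers_with_cot, answers_without_cot))
    return changed / len(answers_with_cot)
```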
# 3. Does Model Size Affect CoT Faithfulness?
In this section, we use the series of LMs of varying sizes from Ganguli et al. (2023). The models are pretrained, decoder-only transformer models finetuned to be helpful dialog assistants with RLHF, similar to the model in §2. | 2307.13702#29 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 30 | 3.4 Soft Prompt Augmentor As shown in Figure 2 (d), with the learned soft prompt parameters and the filtered group of example document-query pairs, soft prompt augmentor generates a weak query for an unlabeled document $d_j$ sampled from $D_{unlabeled}$. In practice, for each dataset, we create two weak datasets: a) $W_{large}$: 100K unlabeled documents are sampled from $D_{unlabeled}$ to generate their weak queries. If the number of unlabeled documents in $D_{unlabeled}$ is smaller than 100K, all the unlabeled documents are utilized to generate weak queries; b) $W_{small}$: Sample 5000 document-query pairs from | 2307.08303#30 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 30 | Latency Latency is an important metric in deployment, which greatly affects user experience. We report decoding latency in Figure 6c. Experimental results show that increasing batch size renders Transformer's latency larger. Moreover, the latency of Transformers grows faster with longer input. In order to make latency acceptable, we have to restrict the batch size, which harms the overall inference throughput of Transformers. By contrast, RetNet's decoding latency outperforms Transformers and keeps almost the same across different batch sizes and input lengths.
# 3.5 Comparison with Transformer Variants
Apart from Transformer, we compare RetNet with various efficient Transformer variants, including Linear Transformer [KVPF20], RWKV [PAA+23], H3 [DFS+22], and Hyena [PMN+23]. All models have 200M parameters with 16 layers and a hidden dimension of 1024. For H3, we set the head dimension as 8. For RWKV, we use the TimeMix module to substitute self-attention layers while keeping FFN layers consistent with other models for fair comparisons. We train the models with 10k steps with a batch size of 0.5M tokens. Most hyperparameters and training corpora are kept the same as in Section 3.1. | 2307.08621#30 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 30 | [34] Chao Ye, Guoshan Lu, Haobo Wang, Liyao Li, Sai Wu, Gang Chen, and Junbo Zhao. Ct-bert: Learning better tabular representations through cross-table pre-training. arXiv preprint arXiv:2307.04308, 2023.
[35] Wentao Ye, Mingfeng Ou, Tianyi Li, Xuetao Ma, Yifan Yanggong, Sai Wu, Jie Fu, Gang Chen, Junbo Zhao, et al. Assessing hidden risks of llms: An empirical study on robustness, consistency, and credibility. arXiv preprint arXiv:2305.10235, 2023.
[36] Aohan Zeng, Xiao Liu, Zhengxiao Du, Zihan Wang, Hanyu Lai, Ming Ding, Zhuoyi Yang, Yifan Xu, Wendi Zheng, Xiao Xia, et al. Glm-130b: An open bilingual pre-trained model. In The Eleventh International Conference on Learning Representations, 2022. | 2307.08674#30 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08691 | 30 | dV ∈ R^(N×d) into T_c blocks dK_1, ..., dK_{T_c} and dV_1, ..., dV_{T_c}, of size B_c × d each.
4: Compute D = rowsum(dO ∘ O) ∈ R^N (pointwise multiply), write D to HBM and divide it into T_r blocks D_1, ..., D_{T_r} of size B_r each.
5: for 1 ≤ j ≤ T_c do
6: Load K_j, V_j from HBM to on-chip SRAM.
7: Initialize dK_j = (0)_{B_c×d}, dV_j = (0)_{B_c×d} on SRAM.
8: for 1 ≤ i ≤ T_r do
9: Load Q_i, O_i, dO_i, dQ_i, L_i, D_i from HBM to on-chip SRAM.
10: On chip, compute S_i^(j) = Q_i K_j^T ∈ R^(B_r×B_c).
11: On chip, compute P_i^(j) = exp(S_i^(j) − L_i) ∈ R^(B_r×B_c).
12: On chip, compute dV_j ← dV_j + (P_i^(j))^T dO_i ∈ R^(B_c×d).
13: On chip, compute dP_i^(j) = dO_i V_j^T ∈ R^(B_r×B_c).
14: On chip, compute dS_i^(j) = P_i^(j) ∘ (dP_i^(j) − D_i) ∈ R^(B_r×B_c) | 2307.08691#30 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 30 | Fig. 12 shows two case studies of 13B models trained on 52k data (ALPACA), 9k selected data (ALPAGASUS), and 9k randomly selected data (ALPACA-9k-random). The left case study focuses on the math capability, where ALPAGASUS can produce a correct answer while ALPACA-9k-random cannot. As the judge, GPT-4 rates the answer of ALPAGASUS by a score of 10.0 while ALPACA-9k-random receives a score of 2.0. The right case study focuses on coding skills, ALPACA-52k
8 The result in Appendix A.4 (GPT4LLM dataset) shows the potential of applying our ChatGPT-based response quality evaluator to filter GPT-4's responses, which is considered as the most powerful model.
cannot follow the instructions but produces a regular expression to validate the website address while ALPAGASUS directly generates the correct code. | 2307.08701#30 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 30 | Our results so far indicate that reasoning faithfulness is significantly lower on some tasks. For those tasks, it is natural to ask whether any models provide faithful reasoning on the tasks. If so, we would be able to choose the model we used to solve the task, such that it was one that produced faithful reasoning on that task, if faithful explanations were important (e.g., for high-stakes settings like medical decision-making). One potential cause for unfaithful reasoning in a model is that a model may already be
3 We chose this approach over paraphrasing the entire chain of thought; the model almost always states the final answer it has arrived at in the final steps of the CoT, so paraphrasing the entire CoT would trivially lead the model to produce the same answer.
# 3.1. Standard Tasks | 2307.13702#30 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 31 | Table 5 reports the perplexity numbers on the in-domain validation set and other out-of-domain corpora, e.g., Project Gutenberg 2019-2022 (PG22) [SDP+22], QMSum [ZYY+21], GovRe-
Method: In-Domain / PG22 / QMSum / GovReport / SummScreen
RWKV: 30.92 / 51.41 / 28.17 / 19.80 / 25.78
H3: 29.97 / 49.17 / 24.29 / 19.19 / 25.11
Hyena: 32.08 / 52.75 / 28.18 / 20.55 / 26.51
Linear Transformer: 40.24 / 63.86 / 28.45 / 25.33 / 32.02
RetNet: 26.05 / 45.27 / 21.33 / 16.52 / 22.48
Table 5: Perplexity results on language modeling. RetNet outperforms other architectures on both the in-domain evaluation set and various out-of-domain corpora. | 2307.08621#31 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08674 | 31 | [37] Susan Zhang, Stephen Roller, Naman Goyal, Mikel Artetxe, Moya Chen, Shuohui Chen, Christopher Dewan, Mona Diab, Xian Li, Xi Victoria Lin, et al. Opt: Open pre-trained transformer language models. arXiv preprint arXiv:2205.01068, 2022.
[38] Wenqi Zhang, Yongliang Shen, Weiming Lu, and Yueting Zhuang. Data-copilot: Bridging billions of data and humans with autonomous workflow. arXiv preprint arXiv:2306.07209, 2023.
[39] Victor Zhong, Caiming Xiong, and Richard Socher. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103, 2017.
13 | 2307.08674#31 | TableGPT: Towards Unifying Tables, Nature Language and Commands into One GPT | Tables are prevalent in real-world databases, requiring significant time and
effort for humans to analyze and manipulate. The advancements in large language
models (LLMs) have made it possible to interact with tables using natural
language input, bringing this capability closer to reality. In this paper, we
present TableGPT, a unified fine-tuned framework that enables LLMs to
understand and operate on tables using external functional commands. It
introduces the capability to seamlessly interact with tables, enabling a wide
range of functionalities such as question answering, data manipulation (e.g.,
insert, delete, query, and modify operations), data visualization, analysis
report generation, and automated prediction. TableGPT aims to provide
convenience and accessibility to users by empowering them to effortlessly
leverage tabular data. At the core of TableGPT lies the novel concept of global
tabular representations, which empowers LLMs to gain a comprehensive
understanding of the entire table beyond meta-information. By jointly training
LLMs on both table and text modalities, TableGPT achieves a deep understanding
of tabular data and the ability to perform complex operations on tables through
chain-of-command instructions. Importantly, TableGPT offers the advantage of
being a self-contained system rather than relying on external API interfaces.
Moreover, it supports efficient data process flow, query rejection (when
appropriate) and private deployment, enabling faster domain data fine-tuning
and ensuring data privacy, which enhances the framework's adaptability to
specific use cases. | http://arxiv.org/pdf/2307.08674 | Liangyu Zha, Junlin Zhou, Liyao Li, Rui Wang, Qingyi Huang, Saisai Yang, Jing Yuan, Changbao Su, Xiang Li, Aofeng Su, Tao Zhang, Chen Zhou, Kaizhe Shou, Miao Wang, Wufang Zhu, Guoshan Lu, Chao Ye, Yali Ye, Wentao Ye, Yiming Zhang, Xinglong Deng, Jie Xu, Haobo Wang, Gang Chen, Junbo Zhao | cs.AI, cs.LG | Technical Report | null | cs.AI | 20230717 | 20230807 | [
{
"id": "2302.13971"
},
{
"id": "2304.02643"
},
{
"id": "2305.10235"
},
{
"id": "2205.01068"
},
{
"id": "2307.04308"
},
{
"id": "2306.03901"
},
{
"id": "1709.00103"
},
{
"id": "2304.12995"
},
{
"id": "2305.04790"
},
{
"id": "2305.19308"
},
{
"id": "2306.07209"
},
{
"id": "2301.07507"
},
{
"id": "2304.10453"
},
{
"id": "2212.10560"
}
] |
2307.08701 | 31 |
cannot follow the instructions but produces a regular expression to validate the website address while ALPAGASUS directly generates the correct code.
We also conduct a fine-grained evaluation of ALPAGASUS on each skill/category in the WizardLM and Vicuna test sets, whose samples are split into a list of skill sets/categories and thus facilitate detailed analyses of the capabilities achieved by IFT (Appendix H). We compare two 7B models on the WizardLM test set and report the results in Fig. 26. Our ALPAGASUS achieves better or equally good performance than ALPACA on 22/29 skills but does not show advantages on the remaining 7 skills such as coding (e.g., code generation). To investigate the reasons, we notice that the coding categories include "python", "Java", "C++", and "C#", which indicate that we can allocate training samples regarding coding skills based on these related keywords (Appendix E). We find that our data selection/filtering, without specifying the proportions of skill categories, leads to a much higher filtering ratio of coding-related data (718 − 85)/718 = 88.16% than the average filtering ratio (52002 − 9229)/52002 = 82.25%. Hence, the resulting coding skill is weaker than other skills. This indicates the importance of keeping the training data diverse and balanced across different categories in IFT. | 2307.08701#31 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 31 | # 3.1. Standard Tasks
Fig. 7 shows the results on the tasks from §2. For 7/8 tasks, the 13B parameter model changes its answer after CoT more often than the 175B model does. In fact, for most tasks, faithfulness gets monotonically worse from 13B to 175B parameter models, indicating inverse scaling in model size for reasoning faithfulness on these tasks. For six of the eight tasks (all but AQuA and LogiQA), the most faithful reasoning comes from the 13B model; on these tasks, the 13B model shows more faithful reasoning even than smaller models. We suspect that smaller models are not yet capable
# Addition Task Samples
Human: Question: What is the solution to the following? 264 + 964 =
Assistant: Let's think step by step: 264 + 964 = 264 + (900 + 64) = (264 + 900) + 64 = 1164 + 64 = 1228. Therefore, 264 + 964 = 1228.
Human: Given the above, what is the most likely answer? Give your answer as an integer enclosed within <answer></answer>.
Assistant: <answer>1228</answer>
Human: Question: What is the solution to the following? 95 + 86 + 98 + 42 + 16 + 63 + 58 + 34 = | 2307.13702#31 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 32 |
MS MARCO [20]: Task = Passage Retrieval; Domain = Misc.; Train #Pairs = 532,761; Eval #Query = -; Test #Query = 6,980; #Corpus = 8,841,823; Avg. D/Q = 1.1; Avg. Word Lengths (Query / Document) = 5.96 / 55.98.
FiQA-2018 [17]: Task = Question Answering; Domain = Finance; Train #Pairs = 14,166; Eval #Query = 500; Test #Query = 648; #Corpus = 57,638; Avg. D/Q = 2.6; Avg. Word Lengths (Query / Document) = 10.77 / 132.32.
Table 2: Statistics of datasets in BEIR benchmark. Avg. D/Q indicates the average number of relevant documents per query.
embeddings to capture domain-specific information but also benefits from the supplementary context provided by the best example document-query pairs. | 2307.08303#32 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 32 | Table 5: Perplexity results on language modeling. RetNet outperforms other architectures on both the in-domain evaluation set and various out-of-domain corpora.
Method: In-Domain / PG22 / QMSum / GovReport / SummScreen
RetNet: 26.05 / 45.27 / 21.33 / 16.52 / 22.48
− swish gate: 27.84 / 49.44 / 22.52 / 17.45 / 23.72
− GroupNorm: 27.54 / 46.95 / 22.61 / 17.59 / 23.73
− γ decay: 27.86 / 47.85 / 21.99 / 17.49 / 23.70
− multi-scale decay: 27.02 / 47.18 / 22.08 / 17.17 / 23.38
Reduce head dimension: 27.68 / 47.72 / 23.09 / 17.46 / 23.41
Table 6: Ablation results on in-domain and out-of-domain corpora.
port [HCP+21], SummScreen [CCWG21, SSI+22]. Overall, RetNet outperforms previous methods across different datasets. RetNet not only achieves better evaluation results on the in-domain corpus but also obtains lower perplexity on several out-of-domain datasets. The favorable performance makes RetNet a strong successor to Transformer, besides the benefits of significant cost reduction (Sections 3.3 and 3.4). | 2307.08621#32 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
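The numbers in Table 5 and Table 6 of chunk 2307.08621#32 above are language-modeling perplexities. As a reminder of how such a figure is computed, here is a minimal sketch (not the authors' evaluation code) that converts per-token negative log-likelihoods, assumed here to be in nats, into a corpus-level perplexity:

```python
import math

def perplexity(token_nlls):
    """Corpus-level perplexity from per-token negative log-likelihoods (nats)."""
    if not token_nlls:
        raise ValueError("need at least one token")
    mean_nll = sum(token_nlls) / len(token_nlls)
    return math.exp(mean_nll)

# A mean NLL of ~3.26 nats corresponds to a perplexity of ~26, the scale of the
# in-domain numbers reported for RetNet above.
print(round(perplexity([3.30, 3.20, 3.28, 3.25]), 2))
```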
2307.08691 | 32 | Multi-query attention and grouped-query attention. Multi-query attention (MQA) [15] and grouped-query attention (GQA) [1] are variants of attention where multiple heads of query attend to the same head of key and value, in order to reduce the size of KV cache during inference. Instead of having to duplicate the key and value heads for the computation, we implicitly manipulate the indices into the head to perform the same computation. In the backward pass, we need to sum the gradients dK and dV across different heads that were implicitly duplicated.
# 3.2 Parallelism
The first version of FlashAttention parallelizes over batch size and number of heads. We use 1 thread block to process one attention head, and there are overall batch size · number of heads thread blocks. Each thread block is scheduled to run on a streaming multiprocessor (SM), and there are 108 of these SMs on
an A100 GPU for example. This scheduling is efficient when this number is large (say ≥ 80), since we can effectively use almost all of the compute resources on the GPU. | 2307.08691#32 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
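Chunk 2307.08691#32 above describes handling MQA/GQA by manipulating head indices instead of materializing duplicated key/value heads, and summing dK and dV over the implicitly duplicated heads in the backward pass. A PyTorch sketch of that bookkeeping, for illustration only (it is not the FlashAttention-2 kernel, and the tensor layout is an assumption):

```python
import torch

def expand_kv_heads(k, v, n_query_heads):
    # k, v: (batch, n_kv_heads, seqlen, head_dim); MQA is the case n_kv_heads == 1.
    n_kv_heads = k.shape[1]
    group = n_query_heads // n_kv_heads          # query heads sharing one KV head
    idx = torch.arange(n_query_heads, device=k.device) // group
    # The kernel does this index mapping implicitly; here we gather explicitly for clarity.
    return k[:, idx], v[:, idx]

def reduce_kv_grads(dk_expanded, dv_expanded, n_kv_heads):
    # Backward pass: sum gradients over the query heads that shared each KV head.
    b, n_query_heads, s, d = dk_expanded.shape
    group = n_query_heads // n_kv_heads
    dk = dk_expanded.reshape(b, n_kv_heads, group, s, d).sum(dim=2)
    dv = dv_expanded.reshape(b, n_kv_heads, group, s, d).sum(dim=2)
    return dk, dv
```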
2307.08701 | 32 | # 7 COST SAVING
We compare the training cost of ALPAGASUS and ALPACA in terms of the estimated expenses for the required computation on AWS. Notably, the training time is reduced from 80m to 14m for the 7B model and 5.5h to 1h for the 13B model. Such training time reduction not only substantially enhances model iteration speed, but also reduces the cost from $27.31 to $4.78 for the 7B model and $225.28 to $40.969 for the 13B model. It's noteworthy that instruction-tuning 65B LLaMA models require a greater number of GPUs and an extended training duration. Consequently, as the model size scales up, our data selection method yields progressively pronounced cost savings.
# 8 RELATED WORK | 2307.08701#32 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 32 | Assistant: <answer>1228</answer>
Human: Question: What is the solution to the following? 95 + 86 + 98 + 42 + 16 + 63 + 58 + 34 =
Assistant: Let's think step by step: 95 + 86 + 98 + 42 + 16 + 63 + 58 + 34 = (95 + 86) + (98 + 42) + (16 + 63) + (58 + 34) = // add in pairs 181 + 140 + 79 + 92 = // add again 492 So the solution is 492.
[Figure 8 plot: panels "Two Digit Addition" and "Three Digit Addition"; y-axis: % Same Answer With and Without CoT; x-axis: Model Size (Parameters); legend: 2, 4, 8, 16 operands, and Baseline]
Human: Given the above, what is the most likely answer? Give your answer as an integer enclosed within <answer></answer>.
Figure 8. Chance of giving the same answer with and without CoT reasoning on synthetic addition tasks, when varying model size.
Assistant: <answer>492</answer>
Table 5. Samples from the 175B model on addition task questions. Top: Sample from a two-operand, three-digit problem. Bottom: Sample from an eight-operand, two-digit problem. | 2307.13702#32 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
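Figure 8 in chunk 2307.13702#32 above plots how often the model gives the same answer with and without CoT. A minimal sketch of that metric, where `answer(question, use_cot)` is a hypothetical helper returning the model's parsed final answer:

```python
def same_answer_rate(questions, answer):
    """Fraction of questions whose final answer is unchanged when the CoT is dropped."""
    matches = sum(answer(q, use_cot=True) == answer(q, use_cot=False) for q in questions)
    return matches / len(questions)
```

A value near 100% is read above as evidence that the reasoning is post-hoc: the stated chain of thought did not change the final answer.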
2307.08303 | 33 | embeddings to capture domain-specific information but also benefits from the supplementary context provided by the best example document-query pairs.
3.5 Weak Data Filter InPars [2] and PROPAGATE [6] demonstrated that it is necessary to clean the weak document-query pairs. We selected the same method as InPars [2] to clean the weak data. After we obtain generated weak document-query pairs (Section 3.4), we filter them by BM25: For each weak query, we run BM25 to retrieve the top k documents from the corpus C. We will drop the weak document-query pair if its paired document is not in the retrieved top k documents. For datasets MS MARCO and FiQA-2018, we tried top k ∈ (10, 30, 50, 70). For BM25CE, we re-ranked top k ∈ (5, 10, 30, 50, 70) documents retrieved by BM25. For all the models, we report the best NDCG@10 score along with other metrics in Table 6. | 2307.08303#33 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
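Section 3.5 in chunk 2307.08303#33 above keeps a weak document-query pair only if BM25 ranks the paired document among the top k results for the weak query. A small sketch of that filter; the rank_bm25 package and whitespace tokenization are my own choices for illustration, not prescribed by the paper:

```python
from rank_bm25 import BM25Okapi

def filter_weak_pairs(weak_pairs, corpus, k=30):
    """Keep (doc_id, weak_query) only if doc_id is in BM25's top-k for that query.

    weak_pairs: iterable of (doc_id, weak_query); corpus: dict doc_id -> document text.
    k mirrors the top-k values tried above (10, 30, 50, 70).
    """
    doc_ids = list(corpus)
    bm25 = BM25Okapi([corpus[d].lower().split() for d in doc_ids])
    kept = []
    for doc_id, query in weak_pairs:
        scores = bm25.get_scores(query.lower().split())
        top_k = {doc_ids[i] for i in
                 sorted(range(len(scores)), key=scores.__getitem__, reverse=True)[:k]}
        if doc_id in top_k:
            kept.append((doc_id, query))
    return kept
```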
2307.08621 | 33 | In addition, we discuss the training and inference efficiency of the compared methods. Let d denote the hidden dimension, and n the sequence length. For training, RWKV's token-mixing complexity is O(dn) while Hyena's is O(dn log n) with Fast Fourier Transform acceleration. The above two methods reduce training FLOPS via employing element-wise operators to trade-off modeling capacity. In comparison with retention, the chunk-wise recurrent representation is O(dn(b + h)), where b is the chunk size, h is the head dimension, and we usually set b = 512, h = 256. For either large model size (i.e., larger d) or sequence length, the additional b + h has negligible effects. So the RetNet training is quite efficient without sacrificing the modeling performance. For inference, among the compared efficient architectures, Hyena has the same complexity (i.e., O(n) per step) as Transformer while the others can perform O(1) decoding.
# 3.6 Ablation Studies
We ablate various design choices of RetNet and report the language modeling results in Table 6. The evaluation settings and metrics are the same as in Section 3.5. | 2307.08621#33 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
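Chunk 2307.08621#33 above gives the chunkwise-recurrent retention training cost as O(dn(b + h)) with b = 512 and h = 256. A back-of-the-envelope sketch comparing that term against the O(n²d) token-mixing cost of standard self-attention (the attention figure is not stated in the chunk; it is included only for scale, and all constant factors are ignored):

```python
def retention_chunkwise_cost(n, d, b=512, h=256):
    # O(d * n * (b + h)) token mixing for chunkwise recurrent retention.
    return d * n * (b + h)

def attention_cost(n, d):
    # O(n^2 * d) token mixing for standard self-attention (for comparison only).
    return n * n * d

for n in (2048, 8192, 65536):
    ratio = attention_cost(n, d=2048) / retention_chunkwise_cost(n, d=2048)
    print(f"n={n}: attention / chunkwise-retention cost ratio ~ {ratio:.1f}x")
```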
2307.08691 | 33 | In the case of long sequences (which usually means small batch sizes or small number of heads), to make better use of the multiprocessors on the GPU, we now additionally parallelize over the sequence length dimension. This results in significant speedup for this regime.
Forward pass. We see that the outer loop (over sequence length) is embarrassingly parallel, and we schedule them on different thread blocks that do not need to communicate with each other. We also parallelize over the batch dimension and number of heads dimension, as done in FlashAttention. The increased parallelism over sequence length helps improve occupancy (fraction of GPU resources being used) when the batch size and number of heads are small, leading to speedup in this case.
These ideas of swapping the order of the loop (outer loop over row blocks and inner loop over column blocks, instead of the other way round in the original FlashAttention paper), as well as parallelizing over the sequence length dimension were first suggested and implemented by Phil Tillet in the Triton [17] implementation.3 | 2307.08691#33 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
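Chunk 2307.08691#33 above adds the sequence-length dimension to the forward-pass parallelization, so each thread block handles one (batch, head, row-block) triple. A plain-Python sketch of that grid enumeration; the row-block size of 128 is illustrative, not a value stated in the chunk:

```python
import math

def forward_grid(batch_size, n_heads, seqlen, block_rows=128):
    """Enumerate one (batch, head, row_block) work unit per thread block."""
    n_row_blocks = math.ceil(seqlen / block_rows)
    return [(b, h, r) for b in range(batch_size)
                      for h in range(n_heads)
                      for r in range(n_row_blocks)]

# Long-sequence, small-batch regime: batch x heads alone gives only 2 * 8 = 16 blocks,
# far fewer than the ~108 SMs mentioned earlier; adding row blocks gives 2048.
print(len(forward_grid(batch_size=2, n_heads=8, seqlen=16384)))
```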
2307.08701 | 33 | Open-sourced Instruction-following models. Instruction-tuning datasets can be gathered in two ways. A number of studies (Köpf et al., 2023; Dolly, 2023; Zhou et al., 2023) utilize crowdsourcing to produce human-generated pairs of instructions and responses. This approach, while effective, can be laborious and costly. Alternatively, ALPACA (Taori et al., 2023) opens the door to create machine-generated IFT sets from the distillation of the âteacherâ LLM, i.e., Text-Davinci-003. Peng et al. (2023) keep the instructions from ALPACA intact but using GPT-4 as the âteacherâ LLM, which enhances model on 3H (Helpfulness, Honesty and Harmlessness) (Askell et al., 2021) alignment criteria. Vicuna (Chiang et al., 2023) is the first to adopt ShareGPT (ShareGPT, 2023) data, which is the realistic dialogue data chatting with ChatGPT shared by users. Xu et al. (2023) and Luo et al. (2023) evolve the original Alpaca instruction set and obtain more complex | 2307.08701#33 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 33 | enough to use CoT, as suggested by Appendix C which shows that <13B models obtain little to no gain in accuracy on these tasks with CoT. Overall, our results suggest that certain tasks do not inherently lead to unfaithful reasoning; instead, only models of a certain capability level (but no higher) on a task seem to produce faithful CoT.
# 3.2. Addition Tasks
To validate the above conclusion, we perform the same evaluation on a set of synthetic addition tasks where we can directly control for task difficulty. Each addition task is constituted of problems with 2, 4, 8, or 16 operands, where each operand is either two or three digits in length. The model's answer is given as a free response, in contrast to the multiple choice tasks used previously.4 Prompts and samples are in Table 5.
From Fig. 8 we see that this measure of post-hoc reason- [Footnote 4: As shown in Table 5, we use XML tags to elicit the final free-response answer after the chain of thought reasoning. Our two smallest models (810M and 1.6B parameters) do not consistently provide an integer answer when prompted this way, so we exclude them from the results.] | 2307.13702#33 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
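Footnote 4 in chunk 2307.13702#33 above notes that the final free-response answer is elicited inside <answer></answer> tags (see the samples in Table 5 of the earlier chunk). A minimal sketch of parsing that integer out of a model completion:

```python
import re

def parse_answer(completion):
    """Return the integer inside the last <answer>...</answer> span, or None."""
    matches = re.findall(r"<answer>\s*(-?\d+)\s*</answer>", completion)
    return int(matches[-1]) if matches else None

print(parse_answer("181 + 140 + 79 + 92 = 492 So the solution is 492. <answer>492</answer>"))
```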
2307.08303 | 34 | a) BM25 is evaluated on the original testing split ð·ð¡ðð ð¡ ; b) W/O Aug models are trained on datasets ð50 ðð£ðð utilized to fine- tune the soft prompt; c) InPars [2] models are trained on ð50 ð¡ðððð and ð100 ðð£ðð plus ð¹ððððððð (ðððððð filtered by ð Section 3.4) generated by human-written prompts. d) SPTARâs soft prompt tuing module (SPTAR-Tuning) is trained on ð50 ðð£ðð ; SP- TARâs DR models (SPTAR-DR) are trained on ð50 ðð£ðð plus ð¹ððððððð | 2307.08303#34 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 34 | We ablate various design choices of RetNet and report the language modeling results in Table 6. The evaluation settings and metrics are the same as in Section 3.5.
Architecture We ablate the swish gate and GroupNorm as described in Equation (8). Table 6 shows that the above two components improve the final performance. Firstly, the gating module is essential for enhancing non-linearity and improving model capability. Notice that we use the same parameter allocation as Transformers after removing the gate. Secondly, group normalization in retention balances the variances of multi-head outputs, which improves training stability and language modeling results.
Multi-Scale Decay Equation (8) shows that we use different γ as the decay rates for the retention heads. In the ablation studies, we examine removing γ decay (i.e., "− γ decay") and applying the same decay rate across heads (i.e., "− multi-scale decay"). Specifically, ablating γ decay is equivalent to γ = 1. In the second setting, we set γ = 127/128 for all heads. Table 6 indicates that both the decay mechanism and using multiple decay rates can improve the language modeling performance.
Head Dimension From the recurrent perspective of Equation (1), the head dimension implies the memory capacity of hidden states. In the ablation study, we reduce the default head dimension from
10 | 2307.08621#34 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
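The architecture ablation in chunk 2307.08621#34 above removes the swish gate and the GroupNorm applied to the multi-head retention outputs (Equation (8) of the paper, which is not reproduced in the chunk). A hedged PyTorch sketch of how such a gated, group-normalized head combination can be wired up; the projection shapes and their placement follow my reading of the description, not the released code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GatedMultiHeadOutput(nn.Module):
    """Combine per-head retention outputs with GroupNorm and a swish gate (sketch)."""

    def __init__(self, d_model, n_heads):
        super().__init__()
        self.norm = nn.GroupNorm(n_heads, d_model)          # balances per-head output variance
        self.w_g = nn.Linear(d_model, d_model, bias=False)  # gate projection ("- swish gate" ablation)
        self.w_o = nn.Linear(d_model, d_model, bias=False)  # output projection

    def forward(self, x, head_outputs):
        # x: (batch, seqlen, d_model); head_outputs: (batch, seqlen, n_heads, d_head)
        b, s, h, dh = head_outputs.shape
        y = head_outputs.reshape(b, s, h * dh)
        y = self.norm(y.transpose(1, 2)).transpose(1, 2)    # GroupNorm over the feature dim
        gate = F.silu(self.w_g(x))                          # swish gate on the layer input
        return self.w_o(gate * y)
```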
2307.08691 | 34 | Backward pass. Notice that the only shared computation between different column blocks is in the update of dQ in Algorithm 2, where we need to load dQ_i from HBM to SRAM, then on chip, update dQ_i ← dQ_i + dS_i^(j) K_j, and write back to HBM. We thus parallelize over the sequence length dimension as well, and schedule 1 thread block for each column block of the backward pass. We use atomic adds to communicate between different thread blocks to update dQ.
We describe the parallelization scheme in Fig. 2.
[Figure 2 diagram: Forward pass and Backward pass panels, each divided among Workers 1-5]
Figure 2: In the forward pass (left), we parallelize the workers (thread blocks) where each worker takes care of a block of rows of the attention matrix. In the backward pass (right), each worker takes care of a block of columns of the attention matrix.
3https://github.com/openai/triton/blob/main/python/tutorials/06-fused-attention.py
# 3.3 Work Partitioning Between Warps | 2307.08691#34 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
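Chunk 2307.08691#34 above accumulates dQ_i ← dQ_i + dS_i^(j) K_j across column blocks, which is why thread blocks working on different column blocks must combine their contributions (atomic adds in the kernel). A serial NumPy sketch that makes the same block-wise accumulation concrete (the block size is illustrative):

```python
import numpy as np

def accumulate_dq(dS, K, block_cols=128):
    """dQ += dS[:, j] @ K[j] summed over column blocks j; a serial stand-in for
    the per-column-block contributions that the kernel adds atomically."""
    dQ = np.zeros((dS.shape[0], K.shape[1]), dtype=K.dtype)
    for start in range(0, dS.shape[1], block_cols):
        j = slice(start, start + block_cols)
        dQ += dS[:, j] @ K[j]          # one column block's contribution
    return dQ

# Sanity check: block-wise accumulation matches the unblocked product dS @ K.
rng = np.random.default_rng(0)
dS, K = rng.standard_normal((256, 384)), rng.standard_normal((384, 64))
assert np.allclose(accumulate_dq(dS, K), dS @ K)
```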
2307.08621 | 35 | Head Dimension From the recurrent perspective of Equation (1), the head dimension implies the memory capacity of hidden states. In the ablation study, we reduce the default head dimension from
256 to 64, i.e., 64 for queries and keys, and 128 for values. We keep the hidden dimension dmodel the same so the number of heads increases. Experimental results in Table 6 show that the larger head dimension achieves better performance.
# 4 Conclusion
In this work, we propose retentive networks (RetNet) for sequence modeling, which enables various representations, i.e., parallel, recurrent, and chunkwise recurrent. RetNet achieves significantly better inference efficiency (in terms of memory, speed, and latency), favorable training parallelization, and competitive performance compared with Transformers. The above advantages make RetNet an ideal successor to Transformers for large language models, especially considering the deployment benefits brought by the O(1) inference complexity. In the future, we would like to scale up RetNet in terms of model size [CDH+22] and training steps. Moreover, retention can efficiently work with structured prompting [HSD+22b] by compressing long-term memory. We will also use RetNet as the backbone architecture to train multimodal large language models [HSD+22a, HDW+23, PWD+23]. In addition, we are interested in deploying RetNet models on various edge devices, such as mobile phones.
# Acknowledgement | 2307.08621#35 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 35 |
# 3.3 Work Partitioning Between Warps
Section 3.2 describes how we schedule thread blocks; even within each thread block, we also have to decide how to partition the work between different warps. We typically use 4 or 8 warps per thread block, and the partitioning is described in Fig. 3.
Forward pass. For each block, FLASHATTENTION splits K and V across 4 warps while keeping Q accessible by all warps. Each warp multiplies to get a slice of QK^T, then they need to multiply with a slice of V and communicate to add up the result. This is referred to as the "split-K" scheme. However, this is inefficient since all warps need to write their intermediate results out to shared memory, synchronize, then add up the intermediate results. These shared memory reads/writes slow down the forward pass in FLASHATTENTION. In FLASHATTENTION-2, we instead split Q across 4 warps while keeping K and V accessible by all warps. After each warp performs matrix multiply to get a slice of QK^T, they just need to multiply with their shared slice of V to get their corresponding slice of the output. There is no need for communication between warps. The reduction in shared memory reads/writes yields speedup (Section 4).
(a) FlashAttention (b) FlashAttention-2 | 2307.08691#35 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 35 | Data-centric AI. Over the last decade, the realm of data-centric AI (Chu et al., 2016; Motamedi et al., 2021) has witnessed substantial progress. Central to this concept is the belief that the quality of data (Hajij et al., 2021; Zha et al., 2023; Chen et al., 2023a;c;d) warrants the same level of importance as algorithms within the AI/ML lifecycle. As noted by Chu et al. (2016), for an effective engagement with diverse types of data across various domains, data cleaning processes should exhibit a higher degree of automation and adaptability. With the advent of the Transformer architecture (Vaswani et al., 2017b), a shift in the paradigm of language models has occurred. Models such as RoBERTa (Liu et al., 2019), BERT (Vaswani et al., 2017a), and Bard 10 all have incorporated this effective structure, stacking varying quantities of transformer blocks to create more potent models. This marked a turning point in NLP research, signifying a heightened emphasis on data as opposed to model structure. Presently, SOTA LLMs like ChatGPT also underscore this shift toward data. They employ user data to conduct Reinforcement Learning from Human Feedback (RLHF) (Ouyang et al., 2022a; Gao et al., 2022), which further aligns with the Data-centric AI philosophy. | 2307.08701#35 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 35 | # 4. Related Work
Analysis of Chain of Thought Faithfulness Recent work has analyzed CoT faithfulness in different ways than our work. Gao (2023) use Shapley analysis to show that certain tokens of the CoT are much more important than others for the final answer. Our work proposes different tests of CoT faithfulness with lower computational costs. Madaan & Yazdanbakhsh (2022) investigate CoT via counterfactual prompting and find that some aspects of the prompt are less important than others for the final answer reached. We intervene on the CoT produced by the model rather than few shot prompt examples and propose general tests for CoT faithfulness. Turpin et al. (2023) discover examples of unfaithful CoT in adversarially constructed settings, showing that CoT reasoning is not always faithful. In that paper, the model produces CoT in the presence of biasing few-shot examples; while the model's final answer is consistent with the bias provided by the prompt, the CoT gives a different explanation for the answer that does not reference the biasing
| 2307.13702#35 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 36 | 4.2 Training Details To train the soft prompt module, we performed fine-tuning using two open-source LLMs: LLaMA-7B and Vicuna-7B. The specific training hyper-parameters are documented in Table 4.
3.6 Dense Retrieval We conducted the experiments on three popular dense retrievers: DPR (cross-encoder version), ColBERT, and BM25CE. The descriptions of the three models can be found in Section 2.1. We only trained DPR2 and ColBERT3. For BM25CE, we directly loaded the BERT model of the best DPR as a re-ranker to re-rank the top k documents retrieved by BM25.
Hyperparameters   LLaMA-7B       Vicuna-7B
Batch Size        4              2
Max Length        1024           1024
Learning Rate     3e-2           3e-2
Optimizer         AdamW          AdamW
Early Stop        5              5
Max epochs        100            100
GPU               1 A100 (80G)   1 A100 (80G)
# 4 EXPERIMENTAL SETUP 4.1 Datasets
Table 4: Hyperparameters of soft prompt tuning | 2307.08303#36 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
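Chunk 2307.08303#36 above builds BM25CE by loading the best DPR's BERT model as a re-ranker over the top-k documents returned by BM25. A hedged sketch of that two-stage pipeline; `bm25_top_k` and `cross_encoder_score` are hypothetical stand-ins for the BM25 index and the loaded re-ranker, not a specific library API:

```python
def bm25ce_rank(query, bm25_top_k, cross_encoder_score, k=10):
    """Two-stage retrieval: BM25 candidate generation, then cross-encoder re-ranking.

    bm25_top_k(query, k) -> list of (doc_id, doc_text);
    cross_encoder_score(query, doc_text) -> relevance score (higher is better).
    """
    candidates = bm25_top_k(query, k)
    scored = [(doc_id, cross_encoder_score(query, text)) for doc_id, text in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```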
2307.08621 | 36 | # Acknowledgement
We would like to acknowledge Jiayu Ding, Songlin Yang, and colleagues from MSRA System Group for the helpful discussions.
# References
[BKH16] Jimmy Lei Ba, Jamie Ryan Kiros, and Geoffrey E Hinton. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
[BMR+20] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sand- hini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, Chris Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott Gray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec Radford, Ilya Sutskever, and Dario Amodei. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877â1901. Curran Associates, Inc., 2020. | 2307.08621#36 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 36 | (a) FlashAttention (b) FlashAttention-2
[Figure 3 diagrams: Q, K^T, and V tiles under each scheme, labeled Warp 1-4, "Accessed by all warps", and "split across different warps"]
Figure 3: Work partitioning between different warps in the forward pass
Backward pass. Similarly for the backward pass, we choose to partition the warps to avoid the "split-K" scheme. However, it still requires some synchronization due to the more complicated dependency between all the different inputs and gradients Q, K, V, O, dO, dQ, dK, dV. Nevertheless, avoiding "split-K" reduces shared memory reads/writes and again yields speedup (Section 4). | 2307.08691#36 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.13702 | 36 |
context. In contrast, this work investigates non-adversarial settings to collect evidence about reasoning faithfulness under a wider variety of realistic conditions. Wei et al. (2022) test three hypotheses for why CoT prompting provides a performance boost: that it produces an equation to be evaluated, that it provides additional test-time compute, and that the CoT better enables the model to access relevant information from pretraining. We expand upon the test-time compute only experiment presented in that work with the filler tokens experiment presented in §2.5, by evaluating a wider range of tasks and varying the number of filler tokens.
could explain how the model uses CoT, but we do not know if our hypotheses are exhaustive or if other hypotheses we did not investigate might be correct. Without ground truth information about the faithfulness of the reasoning sample, it is also unclear how to weigh the importance of each experiment relative to the others in assessing faithfulness. A combination of our measurement techniques, plus additional experiments, will be needed to determine the relative strengths of evidence from each type of experiment and build a more complete picture of reasoning faithfulness. | 2307.13702#36 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 37 | Model Train Eval Test BM25 W/O Aug InPars SPTAR-Tuning SPTAR-DR ð·ð¡ðð ð¡ â ð·ðð£ðð ð·ð¡ðð ð¡ ð¡ðððð + ð100 ð50 ðð£ðð ð¡ðððð + ð100 ðð£ðð + ð¹ððððððð ð·ðð£ðð ð·ð¡ðð ð¡ ð50 ð50 â ð¡ðððð ð50 ðð£ðð + ð¹ððððððð ð·ðð£ðð ð·ð¡ðð ð¡ | 2307.08303#37 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 37 | [BZB+20] Yonatan Bisk, Rowan Zellers, Ronan Le Bras, Jianfeng Gao, and Yejin Choi. Piqa: Reasoning about physical commonsense in natural language. In Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020.
[CCWG21] Mingda Chen, Zewei Chu, Sam Wiseman, and Kevin Gimpel. Summscreen: A dataset for abstractive screenplay summarization. arXiv preprint arXiv:2104.07091, 2021.
[CDH+22] Zewen Chi, Li Dong, Shaohan Huang, Damai Dai, Shuming Ma, Barun Patra, Saksham Singhal, Payal Bajaj, Xia Song, Xian-Ling Mao, Heyan Huang, and Furu Wei. On the representation collapse of sparse mixture of experts. In Advances in Neural Information Processing Systems, 2022.
[CLC+19] Christopher Clark, Kenton Lee, Ming-Wei Chang, Tom Kwiatkowski, Michael Collins, and Kristina Toutanova. BoolQ: Exploring the surprising difficulty of natural yes/no questions. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics, pages 2924–2936, 2019. | 2307.08621#37 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 37 | Tuning block sizes. Increasing block sizes generally reduces shared memory loads/stores, but increases the number of registers required and the total amount of shared memory. Past a certain block size, register spilling causes significant slowdown, or the amount of shared memory required is larger than what the GPU has available, and the kernel cannot run at all. Typically we choose blocks of size {64, 128} × {64, 128}, depending on the head dimension d and the device shared memory size.
We manually tune for each head dimension since there are essentially only 4 choices for block sizes, but this could benefit from auto-tuning to avoid this manual labor. We leave this to future work.
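As a toy illustration of that choice (not the kernel's actual heuristic), the sketch below picks (B_r, B_c) from {64, 128} × {64, 128} under a shared-memory constraint. The per-tile byte estimate (fp16 tiles of Q, K and V) and the 64 KB budget are assumptions made only for this example; the real trade-off also involves register pressure, which is ignored here.

```python
def pick_block_size(head_dim: int, smem_budget: int = 64 * 1024, elem_bytes: int = 2):
    """Pick the largest (B_r, B_c) tile in {64, 128}^2 that fits an assumed smem budget."""
    best = None
    for b_r in (128, 64):
        for b_c in (128, 64):
            # assumed cost: one B_r x d tile of Q plus B_c x d tiles of K and V, in fp16
            smem = (b_r * head_dim + 2 * b_c * head_dim) * elem_bytes
            if smem <= smem_budget and (best is None or b_r * b_c > best[0] * best[1]):
                best = (b_r, b_c)
    return best

print(pick_block_size(64))   # -> (128, 128) under these assumptions
print(pick_block_size(128))  # -> (128, 64) under these assumptions
```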
# 4 Empirical Validation
We evaluate the impact of using FlashAttention-2 to train Transformer models. • Benchmarking attention. We measure the runtime of FlashAttention-2 across different sequence lengths and compare it to a standard implementation in PyTorch, FlashAttention, and FlashAttention in Triton. We confirm that FlashAttention-2 is 1.7-3.0× faster than FlashAttention, 1.3-2.5× faster than FlashAttention in Triton, and 3-10× faster than a standard attention implementation.
| 2307.08691#37 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 37 | Evaluation of LLMs. Evaluating the open-ended instruction-following ability of LLMs is often neglected by previous works (Chung et al., 2022; Anil et al., 2023), though they conduct a series of benchmark evaluations centered around factuality (Hendrycks et al., 2020) and reasoning (Bisk et al., 2020) for their pre-training models. Similarly, the frameworks proposed by Liang et al. (2022) and Gao et al. (2021) focus more on the evaluation of the base models but not on the evaluation of the IFT models, where open-ended instruction-following capability is supposed to be prioritized. Since instruction-following is a general ability but the scope of benchmarks is limited, the recent works such as Koala (Geng et al., 2023), Vicuna (Chiang et al., 2023), Self-Instruct (Wang et al., 2022), and WizardLM (Xu et al., 2023) all provide the instruction sets they collected and some of them also include the categories of the instructions for the evaluation of instruction-tuned LLMs. There are also some leaderboards like Alpaca-Eval (Li et al., 2023) measuring the model's instruction-following ability. Leveraging these recent advancements, we evaluate our models on human instruction sets.
# 9 CONCLUSION | 2307.08701#37 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 37 | Techniques to Increase Reasoning Faithfulness. Prior work has proposed methods to generate reasoning that are more likely to be faithful by construction, due to the way that the reasoning or final answer is produced. Lyu et al. (2023) generate a program in a domain-specific language and execute the program (e.g., using a Python interpreter) to produce the final answer; this process ensures that the generating program is not post-hoc but rather directly used to produce the final answer. Creswell & Shanahan (2022) and Creswell et al. (2023) use a language model to choose statements from a context and then make inferences from those selected statements in a separate context window. Radhakrishnan et al. (2023) answer questions by decomposing them into subquestions, finding that this approach leads to more faithful reasoning according to our early answering and adding mistakes metrics. Some of the potential faithfulness problems raised in our work (i.e., post-hoc reasoning) may apply to the methods above. The metrics we propose may be useful for measuring the extent to which those methods improve faithfulness. | 2307.13702#37 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 38 | [DFE+22] Tri Dao, Dan Fu, Stefano Ermon, Atri Rudra, and Christopher Ré. Flashattention: Fast and memory-efficient exact attention with io-awareness. Advances in Neural Information Processing Systems, 35:16344–16359, 2022.
[DFS+22] Tri Dao, Daniel Y Fu, Khaled K Saab, Armin W Thomas, Atri Rudra, and Christopher Ré. Hungry hungry hippos: Towards language modeling with state space models. arXiv preprint arXiv:2212.14052, 2022.
[DMI+21] Jesse Dodge, Ana Marasović, Gabriel Ilharco, Dirk Groeneveld, Margaret Mitchell, and Matt Gardner. Documenting large webtext corpora: A case study on the colossal clean crawled corpus. In Conference on Empirical Methods in Natural Language Processing, 2021.
[GBB+20] Leo Gao, Stella Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, et al. The Pile: An 800GB dataset of diverse text for language modeling. arXiv preprint arXiv:2101.00027, 2020. | 2307.08621#38 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 38 | FlashAttention-2 reaches up to 230 TFLOPs/s, 73% of the theoretical maximum TFLOPs/s on A100 GPUs.
• End-to-end training speed. When used end-to-end to train GPT-style models of size 1.3B and 2.7B on sequence lengths either 2k or 8k, FlashAttention-2 yields up to 1.3× speedup compared to FlashAttention and 2.8× speedup compared to a baseline without FlashAttention. FlashAttention-2 reaches up to 225 TFLOPs/s (72% model FLOPs utilization) per A100 GPU.
# 4.1 Benchmarking Attention | 2307.08691#38 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 38 | # 9 CONCLUSION
In conclusion, our study reveals significant insights about the influence of data quality over quantity in IFT. Through our proposed data-filtering method, we have demonstrated that relying on a small subset of high-quality IFT data can lead to LLMs that exhibit enhanced instruction-following capabilities, while also offering substantial computational advantages. Notably, our method proves versatile across different rating dimensions (e.g., accuracy and helpfulness), LLM filters (e.g., ChatGPT and Claude-2), base model families (e.g., LLaMA-1 and LLaMA-2), model sizes (e.g., 7B and 13B), and dataset types (e.g., machine-generated and human-written). By emphasizing the importance of data quality, we advocate for a transition in the existing paradigm where data accumulation has been a primary focus. This shift in perspective can lead to more meaningful advancements in the field of LLMs, making models more aligned with human intentions and less prone to errors induced by poor-quality data.
# REFERENCES
Rohan Anil, Andrew M Dai, Orhan Firat, Melvin Johnson, Dmitry Lepikhin, Alexandre Passos, Siamak Shakeri, Emanuel Taropa, Paige Bailey, Zhifeng Chen, et al. Palm 2 technical report. arXiv preprint arXiv:2305.10403, 2023. | 2307.08701#38 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 38 | Techniques to Elicit Language Model Reasoning. Prior work has proposed various methods to improve language model performance by eliciting reasoning before the answer. Approaches include generating subquestions (Dua et al., 2022; Zhou et al., 2023), producing a tree of thoughts (Yao et al., 2023a), devising and executing a plan for answering the question (Wang et al., 2023), and having language models debate to reach an answer (Du et al., 2023), among others. These approaches share a similar structure to chain of thought, where the language model produces earlier steps of reasoning and then conditions on them to produce later steps. As such, we believe that our methods for assessing faithfulness should hold for these methods as well. | 2307.13702#38 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 39 | Table 3: Dataset partition for different methods.
Hyperparameters   DPR            ColBERT
Batch Size        32             32
Max Length        350            350
Learning Rate     2e-5           2e-5
DDP               No             Yes
Optimizer         AdamW          AdamW
Early Stop        10             None
Max epochs        20             20
GPU               4 A100s (40G)  4 A100s (40G)
Table 5: Hyperparameters of DR Models
We conducted experiments on two datasets MS MARCO [20] and FiQA-2018 [17] from BEIR [32]. The description of the two datasets is shown in Table 2. We follow BEIR [32] to report the metrics on the evaluation dataset instead of test data for MS MARCO, so, for MS MARCO, D_test is the same as D_eval. As shown in Table 3:
2https://github.com/beir-cellar/beir/blob/main/examples/retrieval/training/train_sbert.py
3https://github.com/thakur-nandan/beir-ColBERT | 2307.08303#39 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 39 | [GGR21] Albert Gu, Karan Goel, and Christopher Ré. Efficiently modeling long sequences with structured state spaces. arXiv preprint arXiv:2111.00396, 2021.
[HCP+21] Luyang Huang, Shuyang Cao, Nikolaus Parulian, Heng Ji, and Lu Wang. Efficient attentions for long document summarization. arXiv preprint arXiv:2104.02112, 2021.
[HDW+23] Shaohan Huang, Li Dong, Wenhui Wang, Yaru Hao, Saksham Singhal, Shuming Ma, Tengchao Lv, Lei Cui, Owais Khan Mohammed, Qiang Liu, Kriti Aggarwal, Zewen Chi, Johan Bjorck, Vishrav Chaudhary, Subhojit Som, Xia Song, and Furu Wei. Language is not all you need: Aligning perception with language models. ArXiv, abs/2302.14045, 2023.
[HG16] Dan Hendrycks and Kevin Gimpel. Gaussian error linear units (GELUs). arXiv: Learning, 2016.
[HS97] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9:1735–1780, November 1997. | 2307.08621#39 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 39 | # 4.1 Benchmarking Attention
We measure the runtime of different attention methods on an A100 80GB SXM4 GPU for different settings (without / with causal mask, head dimension 64 or 128). We report the results in Fig. 4, Fig. 5 and Fig. 6, showing that FlashAttention-2 is around 2× faster than FlashAttention and FlashAttention in xformers (the "cutlass" implementation). FlashAttention-2 is around 1.3-1.5× faster than FlashAttention in Triton in the forward pass and around 2× faster in the backward pass. Compared to a standard attention implementation in PyTorch, FlashAttention-2 can be up to 10× faster.
Benchmark setting: we vary the sequence length from 512, 1k, ..., 16k, and set batch size so that the total number of tokens is 16k. We set hidden dimension to 2048, and head dimension to be either 64 or 128 (i.e., 32 heads or 16 heads). To calculate the FLOPs of the forward pass, we use:
4 · seqlen² · head dimension · number of heads. | 2307.08691#39 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 39 | Amanda Askell, Yuntao Bai, Anna Chen, Dawn Drain, Deep Ganguli, Tom Henighan, Andy Jones, Nicholas Joseph, Ben Mann, Nova DasSarma, et al. A general language assistant as a laboratory for alignment. arXiv preprint arXiv:2112.00861, 2021.
Yuntao Bai, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, et al. Constitutional ai: Harmlessness from ai feedback. arXiv preprint arXiv:2212.08073, 2022.
Yonatan Bisk, Rowan Zellers, Jianfeng Gao, Yejin Choi, et al. Piqa: Reasoning about physical commonsense in natural language. In Proceedings of the AAAI conference on artificial intelligence, 2020.
Jiuhai Chen, Lichang Chen, and Tianyi Zhou. It takes one to tango but more make trouble? in-context training with different number of demonstrations. arXiv preprint arXiv:2303.08119, 2023a. | 2307.08701#39 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 39 | Additionally, our work analyzed RLHF-finetuned models, which may generate reasoning whose faithfulness is different from that of other models such as pretrained LLMs. For example, pretrained LLMs may be more likely to condition strongly on text they have generated, since they are trained to generate the most plausible completion given some input, rather than maximize the overall human-judged quality of the completion. Pretrained LLMs may thus show fewer signs of post-hoc reasoning, e.g., being more likely to change their final answer when mistakes are added to the CoT. Overall, a promising avenue for future work is to investigate whether training schemes different from RLHF are more effective at eliciting faithful reasoning from LLMs.
# 6. Conclusion | 2307.13702#39 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 40 | [HSD+22a] Yaru Hao, Haoyu Song, Li Dong, Shaohan Huang, Zewen Chi, Wenhui Wang, Shuming Ma, and Furu Wei. Language models are general-purpose interfaces. ArXiv, abs/2206.06336, 2022.
[HSD+22b] Yaru Hao, Yutao Sun, Li Dong, Zhixiong Han, Yuxian Gu, and Furu Wei. Structured prompting: Scaling in-context learning to 1,000 examples. ArXiv, abs/2212.06713, 2022.
[KLBA+22] Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, and Harm de Vries. The Stack: 3TB of permissively licensed source code. Preprint, 2022.
[KVPF20] Angelos Katharopoulos, Apoorv Vyas, Nikolaos Pappas, and François Fleuret. Transformers are rnns: Fast autoregressive transformers with linear attention. In International Conference on Machine Learning, pages 5156–5165. PMLR, 2020. | 2307.08621#40 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 40 | 4 · seqlen² · head dimension · number of heads.
With causal mask, we divide this number by 2 to account for the fact that approximately only half of the entries are calculated. To get the FLOPs of the backward pass, we multiply the forward pass FLOPs by 2.5 (since there are 2 matmuls in the forward pass and 5 matmuls in the backward pass, due to recomputation).
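A small sketch of this FLOPs accounting, using the benchmark shapes given above (hidden dimension 2048, head dimension 64, i.e. 32 heads, batch size chosen so that batch × seqlen = 16k tokens); the measured runtime below is a made-up placeholder, not a reported number:

```python
def attention_flops(seqlen: int, head_dim: int, n_heads: int,
                    causal: bool = False, backward: bool = False) -> float:
    flops = 4 * seqlen**2 * head_dim * n_heads   # forward-pass attention FLOPs
    if causal:
        flops /= 2        # roughly half the entries are computed with a causal mask
    if backward:
        flops *= 2.5      # 5 backward matmuls vs 2 forward ones (with recomputation)
    return flops

seqlen = 4096
batch = 16384 // seqlen                      # keep the total token count at 16k
fwd = batch * attention_flops(seqlen, head_dim=64, n_heads=32)
runtime_s = 1e-3                             # hypothetical measured runtime
print(f"{fwd / runtime_s / 1e12:.1f} TFLOPs/s")
```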
Figure 4: Attention forward + backward speed on A100 GPU (80GB SXM4). (a) Without causal mask, head dimension 64. (b) Without causal mask, head dimension 128. (c) With causal mask, head dimension 64. (d) With causal mask, head dimension 128.
Attention forward speed (A100 80GB SXM4); legend: PyTorch, FlashAttention, xformers, FlashAttention Triton, FlashAttention-2; x-axis: sequence length. | 2307.08691#40 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 40 | Lichang Chen, Jiuhai Chen, Tom Goldstein, Heng Huang, and Tianyi Zhou. Instructzero: Efficient instruction optimization for black-box large language models. arXiv preprint arXiv:2306.03082, 2023b.
Lichang Chen, Minhao Cheng, and Heng Huang. Backdoor learning on sequence to sequence models. arXiv preprint arXiv:2305.02424, 2023c.
Lichang Chen, Heng Huang, and Minhao Cheng. Ptp: Boosting stability and performance of prompt tuning with perturbation-based regularizer. arXiv preprint arXiv:2305.02423, 2023d.
Mark Chen, Jerry Tworek, Heewoo Jun, Qiming Yuan, Henrique Ponde de Oliveira Pinto, Jared Kaplan, Harri Edwards, Yuri Burda, Nicholas Joseph, Greg Brockman, et al. Evaluating large language models trained on code. arXiv preprint arXiv:2107.03374, 2021.
Yew Ken Chia, Pengfei Hong, Lidong Bing, and Soujanya Poria. Instructeval: Towards holistic evaluation of instruction-tuned large language models. arXiv preprint arXiv:2306.04757, 2023. | 2307.08701#40 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 40 | # 6. Conclusion
In this work, we investigate the faithfulness of reasoning samples produced by large language models using chain-of-thought prompting. We test various hypotheses of how chain of thought could provide unfaithful explanations of the model's reasoning, and apply these tests across tasks and model size. Our experiments show large variation in the extent of post-hoc reasoning across tasks, and they provide evidence against the hypotheses that increased test-time compute or phrasing-encoded information are drivers of CoT improvement. We also see that the degree of post-hoc reasoning often shows inverse scaling, getting worse with increasingly capable models, suggesting that smaller models may be better to use if faithful reasoning is important. We hope that our metrics for evaluating CoT faithfulness open up avenues for increasing the faithfulness of CoT, building towards systems whose stated reasoning is trustworthy and verifiable.
# Author Contributions
# 5. Limitations
A key limitation of our investigation is that we do not have a separate way by which to understand the model's real internal reasoning process, without which we cannot know if the chain of thought is faithful to that reasoning process. Here, we collect evidence about various hypotheses that | 2307.13702#40 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 41 | Metric Dataset BM25 W/O Aug InPars SPTAR DPR ColBERT BM25CE DPR ColBERT BM25CE DPR ColBERT BM25CE NDCG MS MARCO 0.2284 0.2361 FiQA-2018 0.1726 0.1332 0.0819 0.1224 0.2288 0.2290 0.1901 0.2226 0.2418 0.2370 0.2311 0.2673 0.2580 0.2404 0.2445 0.2749 0.2647 0.2785 Avg Imp% over BM25 -34.17% -56.02% -1.44% -11.15% 3.08% 7.30% 7.30% 11.82% 16.94% MAP MS MARCO 0.1803 0.1784 FiQA-2018 0.1371 0.0934 0.0643 0.0896 0.1806 0.1684 0.1486 0.1654 0.1939 0.1854 0.1790 0.2052 0.2047 0.1804 0.1958 0.2154 0.2104 0.2130 Avg Imp% over BM25 -35.74% -57.1% -2.7% -12.46% 5.74% | 2307.08303#41 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
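A minimal Python sketch of NDCG@k, the ranking metric reported in the SPTAR results chunk above (2307.08303#41). The formula is the standard gain/log-discount definition; the relevance labels in the example are hypothetical, not values from the paper.

```python
import math

# Sketch (not code from the paper): NDCG@k, the metric reported in the table above.
def dcg_at_k(relevances, k):
    """Discounted cumulative gain over the top-k ranked relevances."""
    return sum((2 ** rel - 1) / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG of the ranking divided by DCG of the ideal ranking."""
    ideal_dcg = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal_dcg if ideal_dcg > 0 else 0.0

# Hypothetical relevance labels of the top-10 documents returned for one query.
print(round(ndcg_at_k([1, 0, 1, 0, 0, 0, 1, 0, 0, 0], 10), 4))
```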
2307.08621 | 41 | [LDM12] Hector Levesque, Ernest Davis, and Leora Morgenstern. The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, 2012.
[LH19] Ilya Loshchilov and Frank Hutter. Decoupled weight decay regularization. In International Conference on Learning Representations, 2019.
[MRL+17] Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. Lsdsem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46–51, 2017.
[MWH+22] Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, and Furu Wei. TorchScale: Transformers at scale. CoRR, abs/2211.13184, 2022. | 2307.08621#41 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 41 | [Figure 5, "Attention forward speed (A100 80GB SXM4)": plot residue condensed. Panels: (a) without causal mask, head dimension 64; (b) without causal mask, head dimension 128; (c) with causal mask, head dimension 64; (d) with causal mask, head dimension 128. Series: PyTorch, FlashAttention, xformers, FlashAttention Triton, FlashAttention-2; y-axis: speed (TFLOPs/s); x-axis: sequence length.]
Figure 5: Attention forward speed on A100 GPU | 2307.08691#41 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 41 | Wei-Lin Chiang, Zhuohan Li, Zi Lin, Ying Sheng, Zhanghao Wu, Hao Zhang, Lianmin Zheng, Siyuan Zhuang, Yonghao Zhuang, Joseph E. Gonzalez, Ion Stoica, and Eric P. Xing. Vicuna: An open-source chatbot impressing gpt-4 with 90%* chatgpt quality, March 2023. URL https://lmsys.org/blog/2023-03-30-vicuna/.
Xu Chu, Ihab F Ilyas, Sanjay Krishnan, and Jiannan Wang. Data cleaning: Overview and emerging challenges. In Proceedings of the 2016 international conference on management of data, pp. 2201–2206, 2016.
Hyung Won Chung, Le Hou, Shayne Longpre, Barret Zoph, Yi Tay, William Fedus, Eric Li, Xuezhi Wang, Mostafa Dehghani, Siddhartha Brahma, et al. Scaling instruction-finetuned language models. arXiv preprint arXiv:2210.11416, 2022. | 2307.08701#41 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 41 | Tamera Lanham led the project, drafted the paper, and conducted all experimental work. Jan Brauner, Samuel R. Bowman, and Ethan Perez provided feedback on the paper draft. Jared Kaplan, Samuel R. Bowman, and Ethan Perez provided feedback throughout the course of the project. Tamera Lanham scoped out the project direction, with help from Ethan Perez. All other listed authors
contributed to the development of otherwise-unpublished models, infrastructure, or contributions that made our experiments possible.
# Acknowledgements
Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. Language models are few-shot learners, 2020. URL https://arxiv.org/abs/2005.14165. | 2307.13702#41 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08621 | 42 | [OSG+23] Antonio Orvieto, Samuel L. Smith, Albert Gu, Anushan Fernando, Caglar Gulcehre, Razvan Pascanu, and Soham De. Resurrecting recurrent neural networks for long sequences. ArXiv, abs/2303.06349, 2023.
[PAA+23] Bo Peng, Eric Alcaide, Quentin Anthony, Alon Albalak, Samuel Arcadinho, Huanqi Cao, Xin Cheng, Michael Chung, Matteo Grella, Kranthi Kiran GV, Xuzheng He, Haowen Hou, Przemyslaw Kazienko, Jan Kocon, Jiaming Kong, Bartlomiej Koptyra, Hayden Lau, Krishna Sri Ipsit Mantri, Ferdinand Mom, Atsushi Saito, Xiangru Tang, Bolun Wang, Johan S. Wind, Stansilaw Wozniak, Ruichong Zhang, Zhenyuan Zhang, Qihang Zhao, Peng Zhou, Jian Zhu, and Rui-Jie Zhu. Rwkv: Reinventing rnns for the transformer era, 2023. | 2307.08621#42 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 42 | (c) With causal mask, head dimension 64
(d) With causal mask, head dimension 128
Figure 5: Attention forward speed on A100 GPU
Just running the same implementation on H100 GPUs (using no special instructions to make use of new features such as TMA and 4th-gen Tensor Cores), we obtain up to 335 TFLOPs/s (Fig. 7). We expect that by using new instructions, we can obtain another 1.5x-2x speedup on H100 GPUs. We leave that to future work.
# 4.2 End-to-end Performance
We measure the training throughput of GPT-style models with either 1.3B or 2.7B parameters, on 8×A100 80GB SXM. As shown in Table 1, FlashAttention-2 yields 2.8× speedup compared to a baseline without FlashAttention and 1.3× speedup compared to FlashAttention, reaching up to 225 TFLOPs/s per A100 GPU.
Note that we calculate the FLOPs by the formula, following Megatron-LM [16] (and many other papers and libraries):
6 · seqlen · number of params + 12 · number of layers · hidden dim · seqlen². | 2307.08691#42 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
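A minimal sketch of the Megatron-style FLOPs estimate quoted in the 2307.08691#42 chunk above, and of the resulting model-FLOPs utilization (MFU). The ~1.3B-parameter configuration and the measured token throughput are illustrative assumptions, not numbers taken from the paper; the 312 TFLOPs/s figure is the A100 BF16 tensor-core peak.

```python
# Rough FLOPs/MFU estimate following the formula quoted above (assumed config).
def flops_per_sequence(seqlen, n_params, n_layers, hidden_dim):
    # 6 * seqlen * params: weight-input matmuls (forward + backward);
    # 12 * layers * hidden * seqlen^2: attention matmuls.
    return 6 * seqlen * n_params + 12 * n_layers * hidden_dim * seqlen ** 2

# Assumed GPT-style ~1.3B configuration (illustrative only).
seqlen, n_params, n_layers, hidden_dim = 2048, 1.3e9, 24, 2048
tokens_per_sec_per_gpu = 25_000  # assumed measured throughput per GPU

achieved_flops = flops_per_sequence(seqlen, n_params, n_layers, hidden_dim) \
    * (tokens_per_sec_per_gpu / seqlen)
a100_peak_tflops = 312  # A100 BF16 tensor-core peak
print(f"~{achieved_flops / 1e12:.0f} TFLOPs/s, "
      f"MFU ~{achieved_flops / 1e12 / a100_peak_tflops:.0%}")
```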
2307.08701 | 42 | Ning Ding, Yulin Chen, Bokai Xu, Yujia Qin, Zhi Zheng, Shengding Hu, Zhiyuan Liu, Maosong Sun, and Bowen Zhou. Enhancing chat language models by scaling high-quality instructional conversations. arXiv preprint arXiv:2305.14233, 2023.
Dolly. Free dolly: Introducing the world's first truly open instruction-tuned llm. Blog Post, 2023. URL https://www.databricks.com/blog/2023/04/12/dolly-first-open-commercially-viable-instruction-tuned-llm.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel Stanovsky, Sameer Singh, and Matt Gardner. DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs. In Proc. of NAACL, 2019.
Yann Dubois, Xuechen Li, Rohan Taori, Tianyi Zhang, Ishaan Gulrajani, Jimmy Ba, Carlos Guestrin, Percy Liang, and Tatsunori B Hashimoto. Alpacafarm: A simulation framework for methods that learn from human feedback. arXiv preprint arXiv:2305.14387, 2023. | 2307.08701#42 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 42 | We thank Alex Ray, Buck Shlegeris, Ian McKenzie, Kshitij Sachan, Kyle McDonell, Leo Gao, Miles Turpin, Owain Evans, Paul Christiano, Peter Barnett, Ryan Greenblatt, Thomas Kwa, William Saunders, and Vivek Hebbar for helpful feedback and discussion.
# References
Andreas, J. Language models as agent models. In Findings of the Association for Computational Linguistics: EMNLP 2022, pp. 5769–5779, Abu Dhabi, United Arab Emirates, December 2022. Association for Computational Linguistics. URL https://aclanthology.org/2022.findings-emnlp.423. | 2307.13702#42 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 43 | Table 6: SPTAR vs baseline models: a) BM25 is an unsupervised method; b) W/O Aug is a supervised method and the three DR models are trained on the dataset without augmentation; c) InPars [2] utilizes human-written prompts and it has no soft prompt filter mechanism; d) Table 3 documents the data splits for each method.
4.3 Evaluation Metrics In the context of text generation models, Perplexity is a commonly employed metric that quantifies the level of uncertainty exhibited by a language model when generating new tokens. This metric is defined as the exponentiated average negative log-likelihood of a sequence, and a lower perplexity value indicates a higher-quality language model. Perplexity is used to evaluate the soft prompt tuning and filter modules.
For DR models, we report the testing results of the checkpoint with the highest NDCG@10 score. Besides the NDCG metric, we have also incorporated several other DR evaluation metrics to facilitate the assessment of the improvement achieved by synthesized augmented queries. These additional metrics include Mean Average Precision (MAP) [41] and Recall [41]. By utilizing these metrics, we can comprehensively evaluate the effectiveness of the synthesized augmented queries in enhancing the performance of DR models.
4.5 Research Questions An extensive set of experiments was designed to address the following research questions: | 2307.08303#43 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
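A short sketch of the perplexity definition given in the 2307.08303#43 chunk above, i.e., the exponentiated average negative log-likelihood of a sequence; the per-token log-probabilities in the example are made up for illustration.

```python
import math

# Perplexity as defined above: exp of the average negative log-likelihood.
def perplexity(token_log_probs):
    """token_log_probs: natural-log probabilities the model assigned to each token."""
    avg_nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(avg_nll)

# Hypothetical per-token log-probabilities for a short generated query.
print(round(perplexity([-0.9, -1.7, -0.3, -2.1, -1.2]), 2))
```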
2307.08621 | 43 | [PMN+23] Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, and Christopher Ré. Hyena hierarchy: Towards larger convolutional language models. arXiv preprint arXiv:2302.10866, 2023. [PWD+23] Zhiliang Peng, Wenhui Wang, Li Dong, Yaru Hao, Shaohan Huang, Shuming Ma, and Furu Wei. Kosmos-2: Grounding multimodal large language models to the world. ArXiv, abs/2306.14824, 2023.
[RZL17] Prajit Ramachandran, Barret Zoph, and Quoc V. Le. Swish: a self-gated activation function. arXiv: Neural and Evolutionary Computing, 2017.
[SDP+22] Yutao Sun, Li Dong, Barun Patra, Shuming Ma, Shaohan Huang, Alon Benhaim, Vishrav Chaudhary, Xia Song, and Furu Wei. A length-extrapolatable transformer. arXiv preprint arXiv:2212.10554, 2022. | 2307.08621#43 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 43 | 6 · seqlen · number of params + 12 · number of layers · hidden dim · seqlen².
The first term accounts for the FLOPs due to weight–input multiplication, and the second term accounts for the FLOPs due to attention. However, one can argue that the second term should be halved, as with causal mask we only need to compute approximately half the number of elements in attention. We choose to follow the formula from the literature (without dividing the attention FLOPs by 2) for consistency.
# 5 Discussion and Future Directions
FlashAttention-2 is 2× faster than FlashAttention, which means that we can train models with 16k longer context for the same price as previously training a 8k context model. We are excited about how this can
(a) Without causal mask, head dimension 64 (b) Without causal mask, head dimension 128 (c) With causal mask, head dimension 64 (d) With causal mask, head dimension 128
Figure 6: Attention backward speed on A100 GPU
Table 1: Training speed (TFLOPs/s/GPU) of GPT-style models on 8×A100 GPUs. FlashAttention-2 reaches up to 225 TFLOPs/s (72% model FLOPs utilization). We compare against a baseline running without FlashAttention. | 2307.08691#43 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
2307.08701 | 43 | Leo Gao, Jonathan Tow, Stella Biderman, Sid Black, Anthony DiPofi, Charles Foster, Laurence Gold- ing, Jeffrey Hsu, Kyle McDonell, Niklas Muennighoff, Jason Phang, Laria Reynolds, Eric Tang, Anish Thite, Ben Wang, Kevin Wang, and Andy Zou. A framework for few-shot language model evaluation, September 2021. URL https://doi.org/10.5281/zenodo.5371628.
Leo Gao, John Schulman, and Jacob Hilton. Scaling laws for reward model overoptimization. arXiv preprint arXiv:2210.10760, 2022.
Xinyang Geng, Arnav Gudibande, Hao Liu, Eric Wallace, Pieter Abbeel, Sergey Levine, and Dawn Song. Koala: A dialogue model for academic research. Blog post, April 2023. URL https://bair.berkeley.edu/blog/2023/04/03/koala/.
Mustafa Hajij, Ghada Zamzmi, Karthikeyan Natesan Ramamurthy, and Aldo Guzman Saenz. Data-centric ai requires rethinking data notion. arXiv preprint arXiv:2110.02491, 2021. | 2307.08701#43 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 43 | Christiano, P. F., Leike, J., Brown, T., Martic, M., Legg, S., and Amodei, D. Deep reinforcement learning from human preferences. In Guyon, I., Luxburg, U. V., Bengio, S., Wallach, H., Fergus, R., Vishwanathan, S., and Garnett, R. (eds.), Advances in Neural Information Processing Systems, volume 30. Curran Associates, Inc., 2017. URL https://proceedings.neurips.cc/paper_files/paper/2017/file/d5e2c0adad503c91f91df240d0cd4e49-Paper.pdf.
Clark, P., Cowhey, I., Etzioni, O., Khot, T., Sabharwal, A., Schoenick, C., and Tafjord, O. Think you have solved question answering? try arc, the ai2 reasoning challenge. arXiv preprint 1803.05457, 2018. | 2307.13702#43 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 44 | 4.5 Research Questions An extensive set of experiments were designed to address the fol- lowing research questions:
RQ1: Can the proposed SPTAR framework achieve improved performance on DR tasks over the baseline models? (Section 5.1) RQ2: During the soft prompt tuning process, does the soft prompt tuning module indeed distill the knowledge from the dataset to the learned soft prompt? What factors contribute to the learned soft prompts? (Section 5.2.1)
RQ3: What are the costs of the soft prompt tuning module? Does the soft prompt tuning module greatly increase the training time and computational resources? (Section 5.2.2)
RQ4: What specific role does the soft prompt filter play in SPTAR? Which metric can be utilized to filter the ð example document-query pairs? (Section 5.2.3)
RQ5: Can the weak data filter further improve the performances of DR models? (Section 5.2.4) | 2307.08303#44 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
2307.08621 | 44 | [Sha19] Noam M. Shazeer. Fast transformer decoding: One write-head is all you need. ArXiv, abs/1911.02150, 2019.
[SLP+21] Jianlin Su, Yu Lu, Shengfeng Pan, Bo Wen, and Yunfeng Liu. Roformer: Enhanced transformer with rotary position embedding. arXiv preprint arXiv:2104.09864, 2021. [SPP+19] Mohammad Shoeybi, Mostofa Patwary, Raul Puri, Patrick LeGresley, Jared Casper, and Bryan Catanzaro. Megatron-LM: Training multi-billion parameter language models using model parallelism. arXiv preprint arXiv:1909.08053, 2019.
[SSI+22] Uri Shaham, Elad Segal, Maor Ivgi, Avia Efrat, Ori Yoran, Adi Haviv, Ankit Gupta, Wenhan Xiong, Mor Geva, Jonathan Berant, et al. Scrolls: Standardized comparison over long language sequences. arXiv preprint arXiv:2201.03533, 2022. | 2307.08621#44 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
2307.08691 | 44 | Model GPT3-1.3B 2k context GPT3-1.3B 8k context GPT3-2.7B 2k context GPT3-2.7B 8k context Without FlashAttention FlashAttention FlashAttention-2 142 TFLOPs/s 72 TFLOPS/s 149 TFLOPs/s 80 TFLOPs/s 189 TFLOPs/s 170 TFLOPs/s 189 TFLOPs/s 175 TFLOPs/s 196 TFLOPs/s 220 TFLOPs/s 205 TFLOPs/s 225 TFLOPs/s
be used to understand long books and reports, high resolution images, audio and video. FlashAttention-2 will also speed up training, ï¬netuning, and inference of existing models. | 2307.08691#44 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
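Reading the flattened Table 1 values in the 2307.08691#44 chunk above column by column (method by method), the quoted 2.8x and 1.3x speedups follow from the GPT3-2.7B, 8k-context throughputs; that column-order reading is an assumption about how the table was flattened.

```python
# Speedups implied by the (assumed) GPT3-2.7B, 8k-context column of Table 1.
baseline, flashattention, flashattention2 = 80, 175, 225  # TFLOPs/s per A100
print(f"vs. no FlashAttention: {flashattention2 / baseline:.1f}x")        # ~2.8x
print(f"vs. FlashAttention:    {flashattention2 / flashattention:.1f}x")  # ~1.3x
```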
2307.08701 | 44 | Dan Hendrycks, Collin Burns, Steven Basart, Andy Zou, Mantas Mazeika, Dawn Song, and Jacob Steinhardt. Measuring massive multitask language understanding. arXiv preprint arXiv:2009.03300, 2020.
Miyoung Ko, Jinhyuk Lee, Hyunjae Kim, Gangwoo Kim, and Jaewoo Kang. Look at the first sentence: Position bias in question answering. arXiv preprint arXiv:2004.14602, 2020.
Andreas Köpf, Yannic Kilcher, Dimitri von Rütte, Sotiris Anagnostidis, Zhi-Rui Tam, Keith Stevens, Abdullah Barhoum, Nguyen Minh Duc, Oliver Stanley, Richárd Nagyfi, et al. Openassistant conversations—democratizing large language model alignment. arXiv preprint arXiv:2304.07327, 2023.
Xuechen Li, Tianyi Zhang, Yann Dubois, Rohan Taori, Ishaan Gulrajani, Carlos Guestrin, Percy Liang, and Tatsunori B. Hashimoto. Alpacaeval: An automatic evaluator of instruction-following models. https://github.com/tatsu-lab/alpaca_eval, 2023.
Preprint | 2307.08701#44 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
2307.13702 | 44 | Bai, Y., Jones, A., Ndousse, K., Askell, A., Chen, A., DasSarma, N., Drain, D., Fort, S., Ganguli, D., Henighan, T., Joseph, N., Kadavath, S., Kernion, J., Conerly, T., El-Showk, S., Elhage, N., Hatfield-Dodds, Z., Hernandez, D., Hume, T., Johnston, S., Kravec, S., Lovitt, L., Nanda, N., Olsson, C., Amodei, D., Brown, T., Clark, J., McCandlish, S., Olah, C., Mann, B., and Kaplan, J. Training a helpful and harmless assistant with reinforcement learning from human feedback. arXiv preprint 2204.05862, 2022.
Creswell, A. and Shanahan, M. Faithful reasoning using large language models. arXiv preprint 2208.14271, 2022.
Creswell, A., Shanahan, M., and Higgins, I. Selection-inference: Exploiting large language models for interpretable logical reasoning. In The Eleventh International Conference on Learning Representations, 2023. URL https://openreview.net/forum?id=3Pf3Wg6o-A4. | 2307.13702#44 | Measuring Faithfulness in Chain-of-Thought Reasoning | Large language models (LLMs) perform better when they produce step-by-step,
"Chain-of-Thought" (CoT) reasoning before answering a question, but it is
unclear if the stated reasoning is a faithful explanation of the model's actual
reasoning (i.e., its process for answering the question). We investigate
hypotheses for how CoT reasoning may be unfaithful, by examining how the model
predictions change when we intervene on the CoT (e.g., by adding mistakes or
paraphrasing it). Models show large variation across tasks in how strongly they
condition on the CoT when predicting their answer, sometimes relying heavily on
the CoT and other times primarily ignoring it. CoT's performance boost does not
seem to come from CoT's added test-time compute alone or from information
encoded via the particular phrasing of the CoT. As models become larger and
more capable, they produce less faithful reasoning on most tasks we study.
Overall, our results suggest that CoT can be faithful if the circumstances such
as the model size and task are carefully chosen. | http://arxiv.org/pdf/2307.13702 | Tamera Lanham, Anna Chen, Ansh Radhakrishnan, Benoit Steiner, Carson Denison, Danny Hernandez, Dustin Li, Esin Durmus, Evan Hubinger, Jackson Kernion, Kamilė Lukošiūtė, Karina Nguyen, Newton Cheng, Nicholas Joseph, Nicholas Schiefer, Oliver Rausch, Robin Larson, Sam McCandlish, Sandipan Kundu, Saurav Kadavath, Shannon Yang, Thomas Henighan, Timothy Maxwell, Timothy Telleen-Lawton, Tristan Hume, Zac Hatfield-Dodds, Jared Kaplan, Jan Brauner, Samuel R. Bowman, Ethan Perez | cs.AI, cs.CL, cs.LG | null | null | cs.AI | 20230717 | 20230717 | [] |
2307.08303 | 45 | RQ5: Can the weak data filter further improve the performances of DR models? (Section 5.2.4)
4.4 Baseline Methods We choose three baseline methods: BM25, Without Augmentation (W/O Aug), and InPars [2] (Section 2.2). The training, evaluation, and testing datasets are documented in Section 4.1. For BM25 [25], we use Anserini [15] with the default Lucene parameters (k1 = 0.9 and b = 0.4). The differences between InPars [2] and SPTAR are twofold: a) InPars [2] utilizes the human-written prompt while SPTAR utilizes an optimized soft prompt; b) SPTAR has a soft prompt filter module to select example document-query pairs. To make it a fair comparison with InPars [2], we choose the same example document-query pairs in the prompt of SPTAR for InPars [2] and utilize InPars' original human-written prompt to prompt the LLaMA and Vicuna to obtain weak document-query pairs. We find for InPars' human-written prompt, the quality of generated weak document-query pairs of Vicuna is much better than that of LLaMA, so, for InPars [2], we choose Vicuna as the weak data generator. | 2307.08303#45 | Soft Prompt Tuning for Augmenting Dense Retrieval with Large Language Models | Dense retrieval (DR) converts queries and documents into dense embeddings and
measures the similarity between queries and documents in vector space. One of
the challenges in DR is the lack of domain-specific training data. While DR
models can learn from large-scale public datasets like MS MARCO through
transfer learning, evidence shows that not all DR models and domains can
benefit from transfer learning equally. Recently, some researchers have
resorted to large language models (LLMs) to improve the zero-shot and few-shot
DR models. However, the hard prompts or human-written prompts utilized in these
works cannot guarantee the good quality of generated weak queries. To tackle
this, we propose soft prompt tuning for augmenting DR (SPTAR): For each task,
we leverage soft prompt-tuning to optimize a task-specific soft prompt on
limited ground truth data and then prompt the LLMs to tag unlabeled documents
with weak queries, yielding enough weak document-query pairs to train
task-specific dense retrievers. We design a filter to select high-quality
example document-query pairs in the prompt to further improve the quality of
weak tagged queries. To the best of our knowledge, there is no prior work
utilizing soft prompt tuning to augment DR models. The experiments demonstrate
that SPTAR outperforms the unsupervised baselines BM25 and the recently
proposed LLMs-based augmentation method for DR. | http://arxiv.org/pdf/2307.08303 | Zhiyuan Peng, Xuyang Wu, Yi Fang | cs.IR, cs.AI, cs.CL, cs.LG | fix typos | null | cs.IR | 20230717 | 20230829 | [
{
"id": "2302.13971"
},
{
"id": "2007.00808"
},
{
"id": "2304.09542"
},
{
"id": "2104.08663"
},
{
"id": "2004.04906"
},
{
"id": "2204.07496"
},
{
"id": "2209.11755"
},
{
"id": "2112.01488"
},
{
"id": "2103.08493"
},
{
"id": "2003.06713"
},
{
"id": "2305.02156"
},
{
"id": "2109.01652"
},
{
"id": "2207.07087"
}
] |
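A minimal sketch of the BM25 term weight behind the Anserini baseline described in the 2307.08303#45 chunk above, assuming the two garbled parameter symbols in the chunk are BM25's k1 = 0.9 and b = 0.4; this is illustrative Python, not Anserini/Lucene source code.

```python
import math

# Classic BM25 term weight with the parameter values assumed above (k1 = 0.9, b = 0.4).
def bm25_term_score(tf, df, doc_len, avg_doc_len, n_docs, k1=0.9, b=0.4):
    idf = math.log(1 + (n_docs - df + 0.5) / (df + 0.5))
    norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * doc_len / avg_doc_len))
    return idf * norm

# Hypothetical statistics for one query term in one document.
print(round(bm25_term_score(tf=3, df=120, doc_len=180, avg_doc_len=220, n_docs=10_000), 3))
```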
2307.08621 | 45 | [VSP+17] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, 4-9 December 2017, Long Beach, CA, USA, pages 6000–6010, 2017.
[WH18] Yuxin Wu and Kaiming He. Group normalization. In Proceedings of the European conference on computer vision (ECCV), pages 3–19, 2018.
[WMD+22] Hongyu Wang, Shuming Ma, Li Dong, Shaohan Huang, Dongdong Zhang, and Furu Wei. DeepNet: Scaling Transformers to 1,000 layers. ArXiv, abs/2203.00555, 2022.
[WMH+22] Hongyu Wang, Shuming Ma, Shaohan Huang, Li Dong, Wenhui Wang, Zhiliang Peng, Yu Wu, Payal Bajaj, Saksham Singhal, Alon Benhaim, et al. Foundation transformers. arXiv preprint arXiv:2210.06423, 2022. | 2307.08621#45 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |
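The summary above distinguishes retention's parallel, recurrent, and chunkwise recurrent forms, with the recurrent form giving O(1)-per-token inference. The numpy sketch below illustrates that recurrence under simplifying assumptions (single head, scalar decay, no position encoding, normalization, or gating), together with the equivalent decay-masked parallel form; names and shapes are illustrative, not the released implementation.

```python
import numpy as np

def recurrent_retention(q, k, v, gamma):
    """O(1)-per-step recurrent retention for a single head.
    q, k: (T, d_k), v: (T, d_v), gamma: scalar decay in (0, 1)."""
    T, d_k = q.shape
    d_v = v.shape[1]
    S = np.zeros((d_k, d_v))                  # recurrent state; size independent of T
    out = np.empty((T, d_v))
    for n in range(T):
        S = gamma * S + np.outer(k[n], v[n])  # S_n = gamma * S_{n-1} + k_n^T v_n
        out[n] = q[n] @ S                     # o_n = q_n S_n
    return out

def parallel_retention(q, k, v, gamma):
    """Equivalent parallel form with a lower-triangular decay mask D."""
    T = q.shape[0]
    n, m = np.indices((T, T))
    D = np.where(n >= m, float(gamma) ** (n - m), 0.0)
    return (q @ k.T * D) @ v

rng = np.random.default_rng(0)
q, k, v = (rng.normal(size=(8, 16)) for _ in range(3))
assert np.allclose(recurrent_retention(q, k, v, 0.9), parallel_retention(q, k, v, 0.9))
```

The assertion passes because unrolling the recurrence gives o_n = sum over m <= n of gamma^(n-m) (q_n . k_m) v_m, which is exactly what the decay-masked parallel product computes.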
2307.08691 | 45 | be used to understand long books and reports, high resolution images, audio and video. FlashAttention-2 will also speed up training, finetuning, and inference of existing models.
In the near future, we plan to collaborate with researchers and engineers to make FlashAttention widely applicable in different kinds of devices (e.g., H100 GPUs, AMD GPUs), as well as new data types such as FP8. As an immediate next step, we plan to optimize FlashAttention-2 for H100 GPUs to use new hardware features (TMA, 4th-gen Tensor Cores, fp8). Combining the low-level optimizations in FlashAttention-2 with high-level algorithmic changes (e.g., local, dilated, block-sparse attention) could allow us to train AI models with much longer context. We are also excited to work with compiler researchers to make these optimization techniques easily programmable.
12
[Figure: Attention forward + backward speed (H100 80GB SXM5). Bar chart comparing PyTorch, FlashAttention, and FlashAttention-2 across sequence lengths 512, 1k, 2k, 4k, 8k, and 16k.] | 2307.08691#45 | FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning | Scaling Transformers to longer sequence lengths has been a major problem in
the last several years, promising to improve performance in language modeling
and high-resolution image understanding, as well as to unlock new applications
in code, audio, and video generation. The attention layer is the main
bottleneck in scaling to longer sequences, as its runtime and memory increase
quadratically in the sequence length. FlashAttention exploits the asymmetric
GPU memory hierarchy to bring significant memory saving (linear instead of
quadratic) and runtime speedup (2-4$\times$ compared to optimized baselines),
with no approximation. However, FlashAttention is still not nearly as fast as
optimized matrix-multiply (GEMM) operations, reaching only 25-40\% of the
theoretical maximum FLOPs/s. We observe that the inefficiency is due to
suboptimal work partitioning between different thread blocks and warps on the
GPU, causing either low-occupancy or unnecessary shared memory reads/writes. We
propose FlashAttention-2, with better work partitioning to address these
issues. In particular, we (1) tweak the algorithm to reduce the number of
non-matmul FLOPs (2) parallelize the attention computation, even for a single
head, across different thread blocks to increase occupancy, and (3) within each
thread block, distribute the work between warps to reduce communication through
shared memory. These yield around 2$\times$ speedup compared to FlashAttention,
reaching 50-73\% of the theoretical maximum FLOPs/s on A100 and getting close
to the efficiency of GEMM operations. We empirically validate that when used
end-to-end to train GPT-style models, FlashAttention-2 reaches training speed
of up to 225 TFLOPs/s per A100 GPU (72\% model FLOPs utilization). | http://arxiv.org/pdf/2307.08691 | Tri Dao | cs.LG | null | null | cs.LG | 20230717 | 20230717 | [
{
"id": "2004.05150"
},
{
"id": "2006.04768"
},
{
"id": "2305.13245"
},
{
"id": "2112.05682"
},
{
"id": "1909.08053"
},
{
"id": "1804.06826"
},
{
"id": "1805.02867"
},
{
"id": "1911.02150"
}
] |
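The summary above attributes FlashAttention-2's gains to tiling the attention computation and repartitioning work across thread blocks and warps. The sketch below is only a conceptual illustration of the exact online-softmax rescaling that lets attention be accumulated block by block without materializing the full score matrix; it is not the CUDA kernel, and the block size, names, and shapes are arbitrary choices.

```python
import numpy as np

def blocked_attention(Q, K, V, block=64):
    """Conceptual single-head attention over key/value blocks with online
    softmax rescaling (exact, no approximation). Q: (Tq, d), K, V: (Tk, d)."""
    Tq, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    m = np.full(Tq, -np.inf)            # running row-wise max of scores
    l = np.zeros(Tq)                    # running softmax denominator
    acc = np.zeros((Tq, V.shape[1]))    # running unnormalized output
    for start in range(0, K.shape[0], block):
        Kb, Vb = K[start:start + block], V[start:start + block]
        S = Q @ Kb.T * scale                    # scores for this block only
        m_new = np.maximum(m, S.max(axis=1))
        correction = np.exp(m - m_new)          # rescale earlier partial sums
        P = np.exp(S - m_new[:, None])
        l = l * correction + P.sum(axis=1)
        acc = acc * correction[:, None] + P @ Vb
        m = m_new
    return acc / l[:, None]

# Sanity check against standard (fully materialized) softmax attention.
rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(128, 32)) for _ in range(3))
S = Q @ K.T / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
ref = (P / P.sum(axis=1, keepdims=True)) @ V
assert np.allclose(blocked_attention(Q, K, V), ref)
```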
2307.08701 | 45 | 11
Preprint
Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110, 2022.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. Roberta: A robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692, 2019.
Shayne Longpre, Le Hou, Tu Vu, Albert Webson, Hyung Won Chung, Yi Tay, Denny Zhou, Quoc V Le, Barret Zoph, Jason Wei, et al. The flan collection: Designing data and methods for effective instruction tuning. arXiv preprint arXiv:2301.13688, 2023. | 2307.08701#45 | AlpaGasus: Training A Better Alpaca with Fewer Data | Large language models~(LLMs) strengthen instruction-following capability
through instruction-finetuning (IFT) on supervised instruction/response data.
However, widely used IFT datasets (e.g., Alpaca's 52k data) surprisingly
contain many low-quality instances with incorrect or irrelevant responses,
which are misleading and detrimental to IFT. In this paper, we propose a simple
and effective data selection strategy that automatically identifies and filters
out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we
introduce AlpaGasus, which is finetuned on only 9k high-quality data filtered
from the 52k Alpaca data. AlpaGasus significantly outperforms the original
Alpaca as evaluated by GPT-4 on multiple test sets and the controlled human
evaluation. Its 13B variant matches $>90\%$ performance of its teacher LLM
(i.e., Text-Davinci-003 generating the 52k data) on test tasks. It also
provides 5.7x faster training, reducing the training time for a 7B variant from
80 minutes (for Alpaca) to 14 minutes. Moreover, the experiments prove the
efficacy of our method across diverse datasets, base models, and LLM filters.
Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be
generally applied to instruction-tuning data, leading to faster training and
better instruction-following models. Our project page is available at:
\url{https://lichang-chen.github.io/AlpaGasus/} | http://arxiv.org/pdf/2307.08701 | Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin | cs.CL | 32 Pages; 29 Figures; 15 Tables | null | cs.CL | 20230717 | 20231104 | [
{
"id": "2302.13971"
},
{
"id": "2305.10403"
},
{
"id": "2210.10760"
},
{
"id": "2304.07327"
},
{
"id": "2009.03300"
},
{
"id": "2306.04757"
},
{
"id": "2110.02491"
},
{
"id": "2107.03374"
},
{
"id": "2303.10158"
},
{
"id": "2305.02423"
},
{
"id": "2211.09110"
},
{
"id": "2303.08119"
},
{
"id": "2210.11416"
},
{
"id": "2301.13688"
},
{
"id": "2212.08073"
},
{
"id": "2004.14602"
},
{
"id": "2110.03613"
},
{
"id": "2210.09261"
},
{
"id": "2112.00861"
},
{
"id": "2306.03082"
},
{
"id": "2305.14387"
},
{
"id": "2212.10560"
},
{
"id": "2305.02424"
},
{
"id": "2306.05685"
},
{
"id": "2305.17926"
},
{
"id": "2305.14233"
},
{
"id": "1907.11692"
},
{
"id": "2304.03277"
},
{
"id": "2305.11206"
}
] |
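The summary above describes AlpaGasus's data-centric recipe: a strong LLM grades each Alpaca-style triple and only high-scoring ones (roughly 9k of the 52k) are kept for instruction finetuning. The sketch below shows that score-then-filter step in the abstract; the 4.5 threshold, rating scale, and `toy_score_fn` judge are assumptions standing in for the paper's actual grading prompt and API calls.

```python
import json

def filter_ift_data(examples, score_fn, threshold=4.5):
    """Keep only instruction-tuning triples whose LLM-assigned quality score
    meets the threshold (threshold and scale are assumed here)."""
    kept = []
    for ex in examples:
        score = score_fn(ex)          # score_fn wraps a strong LLM judge (placeholder)
        if score >= threshold:
            kept.append({**ex, "score": score})
    return kept

def toy_score_fn(ex):
    """Stand-in for an LLM judge: real use would prompt a strong model to rate
    the response's quality for the given instruction and parse the number."""
    return 5.0 if ex["output"].strip() else 0.0

alpaca_like = [
    {"instruction": "Name three primary colors.", "input": "", "output": "Red, blue, yellow."},
    {"instruction": "Translate to French: hello", "input": "", "output": ""},
]
filtered = filter_ift_data(alpaca_like, toy_score_fn)
print(json.dumps(filtered, indent=2))   # only the non-empty response survives
```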
2307.08621 | 46 | [WPN+19] Alex Wang, Yada Pruksachatkun, Nikita Nangia, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel R Bowman. SuperGLUE: A stickier benchmark for general-purpose language understanding systems. arXiv preprint arXiv:1905.00537, 2019.
[ZHB+19] Rowan Zellers, Ari Holtzman, Yonatan Bisk, Ali Farhadi, and Yejin Choi. Hellaswag: Can a machine really finish your sentence? In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019.
[ZYY+21] Ming Zhong, Da Yin, Tao Yu, Ahmad Zaidi, Mutethia Mutuma, Rahul Jha, Ahmed Hassan Awadallah, Asli Celikyilmaz, Yang Liu, Xipeng Qiu, et al. Qmsum: A new benchmark for query-based multi-domain meeting summarization. arXiv preprint arXiv:2104.05938, 2021.
13
# A Hyperparameters | 2307.08621#46 | Retentive Network: A Successor to Transformer for Large Language Models | In this work, we propose Retentive Network (RetNet) as a foundation
architecture for large language models, simultaneously achieving training
parallelism, low-cost inference, and good performance. We theoretically derive
the connection between recurrence and attention. Then we propose the retention
mechanism for sequence modeling, which supports three computation paradigms,
i.e., parallel, recurrent, and chunkwise recurrent. Specifically, the parallel
representation allows for training parallelism. The recurrent representation
enables low-cost $O(1)$ inference, which improves decoding throughput, latency,
and GPU memory without sacrificing performance. The chunkwise recurrent
representation facilitates efficient long-sequence modeling with linear
complexity, where each chunk is encoded parallelly while recurrently
summarizing the chunks. Experimental results on language modeling show that
RetNet achieves favorable scaling results, parallel training, low-cost
deployment, and efficient inference. The intriguing properties make RetNet a
strong successor to Transformer for large language models. Code will be
available at https://aka.ms/retnet. | http://arxiv.org/pdf/2307.08621 | Yutao Sun, Li Dong, Shaohan Huang, Shuming Ma, Yuqing Xia, Jilong Xue, Jianyong Wang, Furu Wei | cs.CL, cs.LG | null | null | cs.CL | 20230717 | 20230809 | [
{
"id": "2104.02112"
},
{
"id": "2101.00027"
},
{
"id": "2212.14052"
},
{
"id": "1905.00537"
},
{
"id": "2104.05938"
},
{
"id": "2104.09864"
},
{
"id": "2212.10554"
},
{
"id": "1909.08053"
},
{
"id": "2210.06423"
},
{
"id": "2201.03533"
},
{
"id": "2104.07091"
},
{
"id": "1607.06450"
},
{
"id": "2111.00396"
},
{
"id": "2302.10866"
}
] |